Feb 12 20:22:11.048692 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:22:11.048715 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:22:11.048728 kernel: BIOS-provided physical RAM map:
Feb 12 20:22:11.048735 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 20:22:11.048742 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 20:22:11.048750 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 20:22:11.048758 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 12 20:22:11.048765 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 12 20:22:11.048774 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 20:22:11.048781 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 20:22:11.048788 kernel: NX (Execute Disable) protection: active
Feb 12 20:22:11.048795 kernel: SMBIOS 2.8 present.
Feb 12 20:22:11.048802 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 12 20:22:11.048809 kernel: Hypervisor detected: KVM
Feb 12 20:22:11.048818 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:22:11.048828 kernel: kvm-clock: cpu 0, msr 28faa001, primary cpu clock
Feb 12 20:22:11.048835 kernel: kvm-clock: using sched offset of 5087665573 cycles
Feb 12 20:22:11.048843 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:22:11.048851 kernel: tsc: Detected 1996.249 MHz processor
Feb 12 20:22:11.048859 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:22:11.048868 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:22:11.048876 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 12 20:22:11.048884 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:22:11.048893 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:22:11.048901 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 12 20:22:11.048909 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:22:11.048917 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:22:11.048925 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:22:11.048932 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 12 20:22:11.048940 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:22:11.048948 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:22:11.048956 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 12 20:22:11.048966 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 12 20:22:11.048974 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 12 20:22:11.048982 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 12 20:22:11.056033 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 12 20:22:11.056043 kernel: No NUMA configuration found
Feb 12 20:22:11.056051 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 12 20:22:11.056060 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 12 20:22:11.056069 kernel: Zone ranges:
Feb 12 20:22:11.056084 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:22:11.056093 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 12 20:22:11.056101 kernel: Normal empty
Feb 12 20:22:11.056109 kernel: Movable zone start for each node
Feb 12 20:22:11.056118 kernel: Early memory node ranges
Feb 12 20:22:11.056126 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 20:22:11.056136 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 12 20:22:11.056144 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 12 20:22:11.056152 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:22:11.056160 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 20:22:11.056168 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 12 20:22:11.056176 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 20:22:11.056184 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:22:11.056193 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:22:11.056201 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 20:22:11.056210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:22:11.056219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:22:11.056227 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:22:11.056235 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:22:11.056243 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:22:11.056252 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 20:22:11.056260 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 12 20:22:11.056268 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:22:11.056276 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:22:11.056285 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 20:22:11.056295 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 20:22:11.056303 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 20:22:11.056311 kernel: pcpu-alloc: [0] 0 1
Feb 12 20:22:11.056319 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 12 20:22:11.056327 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 12 20:22:11.056335 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 12 20:22:11.056343 kernel: Policy zone: DMA32
Feb 12 20:22:11.056353 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:22:11.056364 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:22:11.056373 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:22:11.056381 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 20:22:11.056389 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:22:11.056398 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 12 20:22:11.056406 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:22:11.056415 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:22:11.056423 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:22:11.056432 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:22:11.056441 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:22:11.056449 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:22:11.056458 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:22:11.056466 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:22:11.056474 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:22:11.056482 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:22:11.056490 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 20:22:11.056498 kernel: Console: colour VGA+ 80x25
Feb 12 20:22:11.056509 kernel: printk: console [tty0] enabled
Feb 12 20:22:11.056517 kernel: printk: console [ttyS0] enabled
Feb 12 20:22:11.056525 kernel: ACPI: Core revision 20210730
Feb 12 20:22:11.056533 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:22:11.056542 kernel: x2apic enabled
Feb 12 20:22:11.056550 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:22:11.056558 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 20:22:11.056566 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 20:22:11.056574 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 12 20:22:11.056583 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 12 20:22:11.056593 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 12 20:22:11.056602 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:22:11.056610 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 20:22:11.056618 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:22:11.056626 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:22:11.056634 kernel: Speculative Store Bypass: Vulnerable
Feb 12 20:22:11.056642 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 12 20:22:11.056650 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:22:11.056659 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:22:11.056669 kernel: LSM: Security Framework initializing
Feb 12 20:22:11.056677 kernel: SELinux: Initializing.
Feb 12 20:22:11.056685 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 20:22:11.056693 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 20:22:11.056702 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 12 20:22:11.056710 kernel: Performance Events: AMD PMU driver.
Feb 12 20:22:11.056718 kernel: ... version: 0
Feb 12 20:22:11.056726 kernel: ... bit width: 48
Feb 12 20:22:11.056735 kernel: ... generic registers: 4
Feb 12 20:22:11.056752 kernel: ... value mask: 0000ffffffffffff
Feb 12 20:22:11.056760 kernel: ... max period: 00007fffffffffff
Feb 12 20:22:11.056772 kernel: ... fixed-purpose events: 0
Feb 12 20:22:11.056780 kernel: ... event mask: 000000000000000f
Feb 12 20:22:11.056789 kernel: signal: max sigframe size: 1440
Feb 12 20:22:11.056797 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:22:11.056806 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:22:11.056814 kernel: x86: Booting SMP configuration:
Feb 12 20:22:11.056825 kernel: .... node #0, CPUs: #1
Feb 12 20:22:11.056833 kernel: kvm-clock: cpu 1, msr 28faa041, secondary cpu clock
Feb 12 20:22:11.056842 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 12 20:22:11.056851 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:22:11.056859 kernel: smpboot: Max logical packages: 2
Feb 12 20:22:11.056867 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 12 20:22:11.056876 kernel: devtmpfs: initialized
Feb 12 20:22:11.056884 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:22:11.056893 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:22:11.056905 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:22:11.056913 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:22:11.056922 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:22:11.056930 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:22:11.056939 kernel: audit: type=2000 audit(1707769330.327:1): state=initialized audit_enabled=0 res=1
Feb 12 20:22:11.056947 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:22:11.056956 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:22:11.056964 kernel: cpuidle: using governor menu
Feb 12 20:22:11.056973 kernel: ACPI: bus type PCI registered
Feb 12 20:22:11.057000 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:22:11.057010 kernel: dca service started, version 1.12.1
Feb 12 20:22:11.057018 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:22:11.057027 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:22:11.057036 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:22:11.057044 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:22:11.060438 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:22:11.060618 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:22:11.060628 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:22:11.060641 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:22:11.060650 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:22:11.060659 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:22:11.060668 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:22:11.060676 kernel: ACPI: Interpreter enabled
Feb 12 20:22:11.060685 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:22:11.060694 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:22:11.060703 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:22:11.060711 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 20:22:11.060724 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:22:11.060897 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:22:11.061012 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 20:22:11.061028 kernel: acpiphp: Slot [3] registered
Feb 12 20:22:11.061037 kernel: acpiphp: Slot [4] registered
Feb 12 20:22:11.061045 kernel: acpiphp: Slot [5] registered
Feb 12 20:22:11.061054 kernel: acpiphp: Slot [6] registered
Feb 12 20:22:11.061065 kernel: acpiphp: Slot [7] registered
Feb 12 20:22:11.061074 kernel: acpiphp: Slot [8] registered
Feb 12 20:22:11.061083 kernel: acpiphp: Slot [9] registered
Feb 12 20:22:11.061091 kernel: acpiphp: Slot [10] registered
Feb 12 20:22:11.061100 kernel: acpiphp: Slot [11] registered
Feb 12 20:22:11.061108 kernel: acpiphp: Slot [12] registered
Feb 12 20:22:11.061116 kernel: acpiphp: Slot [13] registered
Feb 12 20:22:11.061126 kernel: acpiphp: Slot [14] registered
Feb 12 20:22:11.061135 kernel: acpiphp: Slot [15] registered
Feb 12 20:22:11.061143 kernel: acpiphp: Slot [16] registered
Feb 12 20:22:11.061154 kernel: acpiphp: Slot [17] registered
Feb 12 20:22:11.061162 kernel: acpiphp: Slot [18] registered
Feb 12 20:22:11.061171 kernel: acpiphp: Slot [19] registered
Feb 12 20:22:11.061179 kernel: acpiphp: Slot [20] registered
Feb 12 20:22:11.061188 kernel: acpiphp: Slot [21] registered
Feb 12 20:22:11.061196 kernel: acpiphp: Slot [22] registered
Feb 12 20:22:11.061205 kernel: acpiphp: Slot [23] registered
Feb 12 20:22:11.061213 kernel: acpiphp: Slot [24] registered
Feb 12 20:22:11.061222 kernel: acpiphp: Slot [25] registered
Feb 12 20:22:11.061233 kernel: acpiphp: Slot [26] registered
Feb 12 20:22:11.061241 kernel: acpiphp: Slot [27] registered
Feb 12 20:22:11.061250 kernel: acpiphp: Slot [28] registered
Feb 12 20:22:11.061258 kernel: acpiphp: Slot [29] registered
Feb 12 20:22:11.061266 kernel: acpiphp: Slot [30] registered
Feb 12 20:22:11.061275 kernel: acpiphp: Slot [31] registered
Feb 12 20:22:11.061283 kernel: PCI host bridge to bus 0000:00
Feb 12 20:22:11.061385 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:22:11.061467 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:22:11.061552 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:22:11.061629 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 20:22:11.061704 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 20:22:11.061780 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:22:11.061883 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:22:11.061979 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 20:22:11.062103 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 20:22:11.062194 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 12 20:22:11.062283 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 20:22:11.062369 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 20:22:11.062464 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 20:22:11.062553 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 20:22:11.062651 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:22:11.062744 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 20:22:11.062830 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 20:22:11.062949 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 12 20:22:11.063060 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 12 20:22:11.063145 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 12 20:22:11.063224 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 12 20:22:11.063310 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 12 20:22:11.063390 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 20:22:11.063488 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:22:11.063572 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 12 20:22:11.063654 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 12 20:22:11.063802 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 12 20:22:11.063890 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 12 20:22:11.064019 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 20:22:11.064417 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 20:22:11.065775 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 12 20:22:11.065874 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 12 20:22:11.066010 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 12 20:22:11.066117 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 12 20:22:11.066211 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 12 20:22:11.066370 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:22:11.066483 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 12 20:22:11.066596 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 12 20:22:11.066611 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:22:11.066621 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:22:11.066631 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:22:11.066641 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:22:11.066651 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:22:11.066666 kernel: iommu: Default domain type: Translated
Feb 12 20:22:11.066676 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:22:11.066782 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 20:22:11.066884 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 20:22:11.066982 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 20:22:11.067120 kernel: vgaarb: loaded
Feb 12 20:22:11.067130 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:22:11.067140 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:22:11.067150 kernel: PTP clock support registered
Feb 12 20:22:11.067166 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:22:11.067175 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:22:11.067185 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 20:22:11.067195 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 12 20:22:11.067205 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:22:11.067215 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:22:11.067224 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:22:11.067234 kernel: pnp: PnP ACPI init
Feb 12 20:22:11.067345 kernel: pnp 00:03: [dma 2]
Feb 12 20:22:11.067367 kernel: pnp: PnP ACPI: found 5 devices
Feb 12 20:22:11.067377 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:22:11.067387 kernel: NET: Registered PF_INET protocol family
Feb 12 20:22:11.067397 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:22:11.067408 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 20:22:11.067418 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:22:11.067428 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 20:22:11.067437 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 20:22:11.067450 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 20:22:11.067460 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 20:22:11.067469 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 20:22:11.067479 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:22:11.067489 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:22:11.067593 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:22:11.067691 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:22:11.067802 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:22:11.067917 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 20:22:11.068052 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 20:22:11.068158 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 20:22:11.068261 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:22:11.068361 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 20:22:11.068375 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:22:11.068385 kernel: Initialise system trusted keyrings
Feb 12 20:22:11.068395 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 20:22:11.068410 kernel: Key type asymmetric registered
Feb 12 20:22:11.068419 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:22:11.068429 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:22:11.068439 kernel: io scheduler mq-deadline registered
Feb 12 20:22:11.068449 kernel: io scheduler kyber registered
Feb 12 20:22:11.068458 kernel: io scheduler bfq registered
Feb 12 20:22:11.068468 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:22:11.068479 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 12 20:22:11.068489 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:22:11.068499 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 20:22:11.068512 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:22:11.068522 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:22:11.068532 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:22:11.068542 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:22:11.068551 kernel: random: crng init done
Feb 12 20:22:11.068561 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:22:11.068571 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:22:11.068580 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:22:11.068727 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 12 20:22:11.068837 kernel: rtc_cmos 00:04: registered as rtc0
Feb 12 20:22:11.083655 kernel: rtc_cmos 00:04: setting system clock to 2024-02-12T20:22:10 UTC (1707769330)
Feb 12 20:22:11.083775 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 12 20:22:11.083789 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:22:11.083798 kernel: Segment Routing with IPv6
Feb 12 20:22:11.083808 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:22:11.083816 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:22:11.083825 kernel: Key type dns_resolver registered
Feb 12 20:22:11.083849 kernel: IPI shorthand broadcast: enabled
Feb 12 20:22:11.083858 kernel: sched_clock: Marking stable (691053765, 120525818)->(837695217, -26115634)
Feb 12 20:22:11.083866 kernel: registered taskstats version 1
Feb 12 20:22:11.083875 kernel: Loading compiled-in X.509 certificates
Feb 12 20:22:11.083884 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:22:11.083892 kernel: Key type .fscrypt registered
Feb 12 20:22:11.083900 kernel: Key type fscrypt-provisioning registered
Feb 12 20:22:11.083908 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:22:11.083919 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:22:11.083927 kernel: ima: No architecture policies found
Feb 12 20:22:11.083936 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:22:11.083944 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:22:11.083952 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:22:11.083960 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:22:11.083968 kernel: Run /init as init process
Feb 12 20:22:11.083976 kernel: with arguments:
Feb 12 20:22:11.084011 kernel: /init
Feb 12 20:22:11.084023 kernel: with environment:
Feb 12 20:22:11.084031 kernel: HOME=/
Feb 12 20:22:11.084039 kernel: TERM=linux
Feb 12 20:22:11.084047 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:22:11.084059 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:22:11.084070 systemd[1]: Detected virtualization kvm.
Feb 12 20:22:11.084079 systemd[1]: Detected architecture x86-64.
Feb 12 20:22:11.084088 systemd[1]: Running in initrd.
Feb 12 20:22:11.084100 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:22:11.084108 systemd[1]: Hostname set to .
Feb 12 20:22:11.084118 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:22:11.084127 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:22:11.084135 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:22:11.084143 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:22:11.084152 systemd[1]: Reached target paths.target.
Feb 12 20:22:11.084160 systemd[1]: Reached target slices.target.
Feb 12 20:22:11.084170 systemd[1]: Reached target swap.target.
Feb 12 20:22:11.084179 systemd[1]: Reached target timers.target.
Feb 12 20:22:11.084187 systemd[1]: Listening on iscsid.socket.
Feb 12 20:22:11.084196 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:22:11.084205 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:22:11.084213 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:22:11.084222 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:22:11.084231 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:22:11.084241 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:22:11.084250 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:22:11.084258 systemd[1]: Reached target sockets.target.
Feb 12 20:22:11.084267 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:22:11.084288 systemd[1]: Finished network-cleanup.service.
Feb 12 20:22:11.084298 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:22:11.084310 systemd[1]: Starting systemd-journald.service...
Feb 12 20:22:11.084318 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:22:11.084327 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:22:11.084336 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:22:11.084345 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:22:11.084354 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:22:11.084363 kernel: audit: type=1130 audit(1707769331.056:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.084373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:22:11.084386 systemd-journald[185]: Journal started
Feb 12 20:22:11.084445 systemd-journald[185]: Runtime Journal (/run/log/journal/7651be1214464185ad865b3920c07bf0) is 4.9M, max 39.5M, 34.5M free.
Feb 12 20:22:11.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.085097 systemd-modules-load[186]: Inserted module 'overlay'
Feb 12 20:22:11.129649 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:22:11.129675 kernel: Bridge firewalling registered
Feb 12 20:22:11.129703 systemd[1]: Started systemd-journald.service.
Feb 12 20:22:11.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.089513 systemd-resolved[187]: Positive Trust Anchors:
Feb 12 20:22:11.089524 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:22:11.136150 kernel: audit: type=1130 audit(1707769331.128:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.136173 kernel: SCSI subsystem initialized
Feb 12 20:22:11.089563 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:22:11.099648 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 12 20:22:11.112204 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 12 20:22:11.160490 kernel: audit: type=1130 audit(1707769331.136:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.160514 kernel: audit: type=1130 audit(1707769331.138:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.160526 kernel: audit: type=1130 audit(1707769331.139:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.160537 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:22:11.160548 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:22:11.160559 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:22:11.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:11.130133 systemd[1]: Started systemd-resolved.service.
Feb 12 20:22:11.138516 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:22:11.140050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:22:11.140653 systemd[1]: Reached target nss-lookup.target. Feb 12 20:22:11.145556 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 20:22:11.163131 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 12 20:22:11.164077 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:22:11.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.172043 kernel: audit: type=1130 audit(1707769331.166:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.172208 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:22:11.182165 kernel: audit: type=1130 audit(1707769331.173:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.173299 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 20:22:11.187844 kernel: audit: type=1130 audit(1707769331.182:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.175589 systemd[1]: Starting dracut-cmdline.service... 
Feb 12 20:22:11.182221 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:22:11.192746 dracut-cmdline[206]: dracut-dracut-053 Feb 12 20:22:11.195094 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:22:11.262088 kernel: Loading iSCSI transport class v2.0-870. Feb 12 20:22:11.275056 kernel: iscsi: registered transport (tcp) Feb 12 20:22:11.299159 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:22:11.299225 kernel: QLogic iSCSI HBA Driver Feb 12 20:22:11.352722 systemd[1]: Finished dracut-cmdline.service. Feb 12 20:22:11.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.356100 systemd[1]: Starting dracut-pre-udev.service... Feb 12 20:22:11.360021 kernel: audit: type=1130 audit(1707769331.352:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.441149 kernel: raid6: sse2x4 gen() 11257 MB/s Feb 12 20:22:11.458078 kernel: raid6: sse2x4 xor() 5090 MB/s Feb 12 20:22:11.475080 kernel: raid6: sse2x2 gen() 14405 MB/s Feb 12 20:22:11.492080 kernel: raid6: sse2x2 xor() 8816 MB/s Feb 12 20:22:11.509080 kernel: raid6: sse2x1 gen() 11156 MB/s Feb 12 20:22:11.526776 kernel: raid6: sse2x1 xor() 7022 MB/s Feb 12 20:22:11.526844 kernel: raid6: using algorithm sse2x2 gen() 14405 MB/s Feb 12 20:22:11.526875 kernel: raid6: .... 
xor() 8816 MB/s, rmw enabled Feb 12 20:22:11.527602 kernel: raid6: using ssse3x2 recovery algorithm Feb 12 20:22:11.542080 kernel: xor: measuring software checksum speed Feb 12 20:22:11.543033 kernel: prefetch64-sse : 18464 MB/sec Feb 12 20:22:11.545366 kernel: generic_sse : 16819 MB/sec Feb 12 20:22:11.545413 kernel: xor: using function: prefetch64-sse (18464 MB/sec) Feb 12 20:22:11.656093 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:22:11.672796 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:22:11.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.673000 audit: BPF prog-id=7 op=LOAD Feb 12 20:22:11.675000 audit: BPF prog-id=8 op=LOAD Feb 12 20:22:11.676187 systemd[1]: Starting systemd-udevd.service... Feb 12 20:22:11.690690 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 20:22:11.703160 systemd[1]: Started systemd-udevd.service. Feb 12 20:22:11.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.708505 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 20:22:11.732371 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Feb 12 20:22:11.778794 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:22:11.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.780140 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:22:11.836074 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 12 20:22:11.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:11.884047 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 12 20:22:11.901392 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:22:11.901461 kernel: GPT:17805311 != 41943039 Feb 12 20:22:11.901473 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:22:11.902536 kernel: GPT:17805311 != 41943039 Feb 12 20:22:11.902558 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:22:11.902569 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:22:11.924024 kernel: libata version 3.00 loaded. Feb 12 20:22:11.929274 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 20:22:11.935011 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (431) Feb 12 20:22:11.946014 kernel: scsi host0: ata_piix Feb 12 20:22:11.959794 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:22:11.976679 kernel: scsi host1: ata_piix Feb 12 20:22:11.976851 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 12 20:22:11.976865 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 12 20:22:11.980960 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 20:22:11.984261 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:22:11.984816 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:22:11.989668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:22:11.991053 systemd[1]: Starting disk-uuid.service... Feb 12 20:22:12.077335 disk-uuid[460]: Primary Header is updated. Feb 12 20:22:12.077335 disk-uuid[460]: Secondary Entries is updated. 
Feb 12 20:22:12.077335 disk-uuid[460]: Secondary Header is updated. Feb 12 20:22:12.088039 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:22:12.100032 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:22:12.113027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:22:13.119460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:22:13.120543 disk-uuid[461]: The operation has completed successfully. Feb 12 20:22:13.185813 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:22:13.187359 systemd[1]: Finished disk-uuid.service. Feb 12 20:22:13.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.214536 systemd[1]: Starting verity-setup.service... Feb 12 20:22:13.254571 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 12 20:22:13.370924 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:22:13.375250 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:22:13.382053 systemd[1]: Finished verity-setup.service. Feb 12 20:22:13.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.522015 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:22:13.522719 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:22:13.523791 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:22:13.525301 systemd[1]: Starting ignition-setup.service... 
Feb 12 20:22:13.526957 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 20:22:13.548522 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:22:13.548570 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:22:13.548585 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:22:13.569593 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:22:13.589921 systemd[1]: Finished ignition-setup.service. Feb 12 20:22:13.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.591284 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:22:13.691309 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:22:13.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.694000 audit: BPF prog-id=9 op=LOAD Feb 12 20:22:13.695624 systemd[1]: Starting systemd-networkd.service... Feb 12 20:22:13.745161 systemd-networkd[631]: lo: Link UP Feb 12 20:22:13.745910 systemd-networkd[631]: lo: Gained carrier Feb 12 20:22:13.747208 systemd-networkd[631]: Enumeration completed Feb 12 20:22:13.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.747296 systemd[1]: Started systemd-networkd.service. Feb 12 20:22:13.747902 systemd[1]: Reached target network.target. Feb 12 20:22:13.749898 systemd[1]: Starting iscsiuio.service... Feb 12 20:22:13.750455 systemd-networkd[631]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 20:22:13.752259 systemd-networkd[631]: eth0: Link UP Feb 12 20:22:13.752263 systemd-networkd[631]: eth0: Gained carrier Feb 12 20:22:13.783776 systemd[1]: Started iscsiuio.service. Feb 12 20:22:13.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.785281 systemd[1]: Starting iscsid.service... Feb 12 20:22:13.796081 iscsid[636]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:22:13.796081 iscsid[636]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 12 20:22:13.796081 iscsid[636]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 20:22:13.796081 iscsid[636]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:22:13.796081 iscsid[636]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:22:13.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.805545 iscsid[636]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:22:13.798149 systemd[1]: Started iscsid.service. Feb 12 20:22:13.799123 systemd-networkd[631]: eth0: DHCPv4 address 172.24.4.211/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:22:13.801579 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:22:13.822919 systemd[1]: Finished dracut-initqueue.service. 
Feb 12 20:22:13.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.823536 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:22:13.824054 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:22:13.824561 systemd[1]: Reached target remote-fs.target. Feb 12 20:22:13.827085 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:22:13.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:13.837805 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:22:14.028930 ignition[573]: Ignition 2.14.0 Feb 12 20:22:14.030067 ignition[573]: Stage: fetch-offline Feb 12 20:22:14.030277 ignition[573]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:22:14.030325 ignition[573]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:22:14.032699 ignition[573]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:22:14.032925 ignition[573]: parsed url from cmdline: "" Feb 12 20:22:14.035847 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:22:14.032934 ignition[573]: no config URL provided Feb 12 20:22:14.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:14.036736 systemd-resolved[187]: Detected conflict on linux IN A 172.24.4.211 Feb 12 20:22:14.032947 ignition[573]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:22:14.036758 systemd-resolved[187]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Feb 12 20:22:14.032966 ignition[573]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:22:14.039413 systemd[1]: Starting ignition-fetch.service... Feb 12 20:22:14.032985 ignition[573]: failed to fetch config: resource requires networking Feb 12 20:22:14.033273 ignition[573]: Ignition finished successfully Feb 12 20:22:14.071633 ignition[654]: Ignition 2.14.0 Feb 12 20:22:14.071660 ignition[654]: Stage: fetch Feb 12 20:22:14.071930 ignition[654]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:22:14.071973 ignition[654]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:22:14.074107 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:22:14.074323 ignition[654]: parsed url from cmdline: "" Feb 12 20:22:14.074333 ignition[654]: no config URL provided Feb 12 20:22:14.074347 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:22:14.074367 ignition[654]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:22:14.081902 ignition[654]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 12 20:22:14.081969 ignition[654]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 12 20:22:14.082024 ignition[654]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 12 20:22:14.525522 ignition[654]: GET result: OK Feb 12 20:22:14.525766 ignition[654]: parsing config with SHA512: c750970f590ed9489bb64a2da4ad2c5110cb710bf2487ce89e8ddf6cdb71cb32e33e58782c3140523c71edb12946b8af6ad508c6e3c5c274d9f325b48e2b1164 Feb 12 20:22:14.642450 unknown[654]: fetched base config from "system" Feb 12 20:22:14.644062 unknown[654]: fetched base config from "system" Feb 12 20:22:14.645393 unknown[654]: fetched user config from "openstack" Feb 12 20:22:14.647751 ignition[654]: fetch: fetch complete Feb 12 20:22:14.647779 ignition[654]: fetch: fetch passed Feb 12 20:22:14.647913 ignition[654]: Ignition finished successfully Feb 12 20:22:14.650919 systemd[1]: Finished ignition-fetch.service. Feb 12 20:22:14.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:14.654560 systemd[1]: Starting ignition-kargs.service... Feb 12 20:22:14.677855 ignition[660]: Ignition 2.14.0 Feb 12 20:22:14.677884 ignition[660]: Stage: kargs Feb 12 20:22:14.678206 ignition[660]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:22:14.678248 ignition[660]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:22:14.680458 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:22:14.693574 ignition[660]: kargs: kargs passed Feb 12 20:22:14.693647 ignition[660]: Ignition finished successfully Feb 12 20:22:14.695048 systemd[1]: Finished ignition-kargs.service. Feb 12 20:22:14.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:22:14.697830 systemd[1]: Starting ignition-disks.service... Feb 12 20:22:14.705770 ignition[666]: Ignition 2.14.0 Feb 12 20:22:14.705785 ignition[666]: Stage: disks Feb 12 20:22:14.705891 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:22:14.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:14.709829 systemd[1]: Finished ignition-disks.service. Feb 12 20:22:14.705911 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:22:14.711675 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:22:14.706828 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:22:14.712767 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:22:14.708324 ignition[666]: disks: disks passed Feb 12 20:22:14.713863 systemd[1]: Reached target local-fs.target. Feb 12 20:22:14.708541 ignition[666]: Ignition finished successfully Feb 12 20:22:14.715391 systemd[1]: Reached target sysinit.target. Feb 12 20:22:14.716953 systemd[1]: Reached target basic.target. Feb 12 20:22:14.720413 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:22:14.742907 systemd-fsck[674]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks Feb 12 20:22:14.751429 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:22:14.752757 systemd[1]: Mounting sysroot.mount... Feb 12 20:22:14.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:14.771029 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Feb 12 20:22:14.771779 systemd[1]: Mounted sysroot.mount. Feb 12 20:22:14.772322 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:22:14.775427 systemd[1]: Mounting sysroot-usr.mount... Feb 12 20:22:14.776260 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:22:14.776942 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 12 20:22:14.780889 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:22:14.781759 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:22:14.785302 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:22:14.792614 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:22:14.795740 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:22:14.811011 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (681) Feb 12 20:22:14.813373 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:22:14.824242 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:22:14.824273 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:22:14.824284 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:22:14.832535 initrd-setup-root[710]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:22:14.839352 initrd-setup-root[718]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:22:14.848337 initrd-setup-root[728]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:22:14.848340 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:22:14.930204 systemd[1]: Finished initrd-setup-root.service. Feb 12 20:22:14.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:14.933152 systemd[1]: Starting ignition-mount.service... Feb 12 20:22:14.935771 systemd[1]: Starting sysroot-boot.service... Feb 12 20:22:14.952807 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 20:22:14.953088 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 20:22:14.978835 ignition[749]: INFO : Ignition 2.14.0 Feb 12 20:22:14.979613 ignition[749]: INFO : Stage: mount Feb 12 20:22:14.980256 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:22:14.980942 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:22:14.983215 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:22:14.985358 ignition[749]: INFO : mount: mount passed Feb 12 20:22:14.989259 ignition[749]: INFO : Ignition finished successfully Feb 12 20:22:14.990567 systemd[1]: Finished ignition-mount.service. Feb 12 20:22:14.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:14.992877 systemd[1]: Finished sysroot-boot.service. Feb 12 20:22:14.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:15.008137 coreos-metadata[680]: Feb 12 20:22:15.008 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 12 20:22:15.028268 coreos-metadata[680]: Feb 12 20:22:15.028 INFO Fetch successful Feb 12 20:22:15.028268 coreos-metadata[680]: Feb 12 20:22:15.028 INFO wrote hostname ci-3510-3-2-4-c19eb846e8.novalocal to /sysroot/etc/hostname Feb 12 20:22:15.031671 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 12 20:22:15.031797 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 12 20:22:15.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:15.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:15.033828 systemd[1]: Starting ignition-files.service... Feb 12 20:22:15.041321 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:22:15.056036 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Feb 12 20:22:15.061426 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:22:15.061449 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:22:15.061461 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:22:15.070413 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 12 20:22:15.086178 ignition[777]: INFO : Ignition 2.14.0 Feb 12 20:22:15.086178 ignition[777]: INFO : Stage: files Feb 12 20:22:15.087227 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:22:15.087227 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:22:15.088890 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:22:15.091980 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:22:15.093248 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:22:15.093248 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:22:15.097281 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:22:15.098028 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:22:15.099105 unknown[777]: wrote ssh authorized keys file for user: core Feb 12 20:22:15.099740 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:22:15.100459 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:22:15.100459 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 12 20:22:15.205864 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 20:22:15.547102 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:22:15.547102 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:22:15.552187 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:22:15.776397 systemd-networkd[631]: eth0: Gained IPv6LL Feb 12 20:22:15.944165 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 20:22:16.410091 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 12 20:22:16.410091 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:22:16.410091 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:22:16.410091 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 12 20:22:16.906156 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 20:22:17.774730 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 12 20:22:17.774730 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:22:17.788449 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:22:17.788449 
ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:22:17.788449 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 20:22:17.788449 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1 Feb 12 20:22:18.265691 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 20:22:38.849528 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3 Feb 12 20:22:38.854176 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 20:22:38.854176 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:22:38.854176 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Feb 12 20:22:39.454518 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 20:23:32.255924 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Feb 12 20:23:32.262608 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:23:32.262608 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:23:32.262608 ignition[777]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Feb 12 20:23:33.028534 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 12 20:23:58.818334 ignition[777]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Feb 12 20:23:58.822675 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:23:58.822675 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 20:23:58.822675 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 12 20:23:59.215223 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 12 20:23:59.712764 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 20:23:59.715231 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:23:59.717610 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:23:59.717610 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 20:23:59.717610 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 20:23:59.717610 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file 
"/sysroot/home/core/nfs-pod.yaml" Feb 12 20:23:59.717610 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 20:23:59.717610 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:23:59.717610 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:23:59.736955 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:23:59.736955 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" 
Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(14): [started] processing unit "prepare-critools.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(14): [finished] processing unit "prepare-critools.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Feb 12 20:23:59.736955 ignition[777]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:23:59.799744 kernel: kauditd_printk_skb: 27 callbacks suppressed Feb 12 20:23:59.799771 kernel: audit: type=1130 audit(1707769439.745:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.799785 kernel: audit: type=1130 audit(1707769439.771:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.799798 kernel: audit: type=1130 audit(1707769439.783:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:59.799809 kernel: audit: type=1131 audit(1707769439.783:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(18): [started] processing unit "coreos-metadata.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(18): op(19): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(18): op(19): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(18): [finished] processing unit "coreos-metadata.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(1d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:23:59.799957 ignition[777]: INFO : files: op(1d): [finished] setting preset to enabled for 
"coreos-metadata-sshkeys@.service " Feb 12 20:23:59.799957 ignition[777]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:23:59.799957 ignition[777]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:23:59.799957 ignition[777]: INFO : files: files passed Feb 12 20:23:59.799957 ignition[777]: INFO : Ignition finished successfully Feb 12 20:23:59.742197 systemd[1]: Finished ignition-files.service. Feb 12 20:23:59.749700 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:23:59.814851 initrd-setup-root-after-ignition[800]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:23:59.765282 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:23:59.766863 systemd[1]: Starting ignition-quench.service... Feb 12 20:23:59.771606 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:23:59.773312 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:23:59.773481 systemd[1]: Finished ignition-quench.service. Feb 12 20:23:59.783861 systemd[1]: Reached target ignition-complete.target. Feb 12 20:23:59.793623 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:23:59.823629 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:23:59.823738 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:23:59.831643 kernel: audit: type=1130 audit(1707769439.823:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:59.831662 kernel: audit: type=1131 audit(1707769439.823:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.824872 systemd[1]: Reached target initrd-fs.target. Feb 12 20:23:59.832092 systemd[1]: Reached target initrd.target. Feb 12 20:23:59.833060 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:23:59.833757 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:23:59.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.845897 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:23:59.850402 kernel: audit: type=1130 audit(1707769439.845:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.850567 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:23:59.860564 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:23:59.861674 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:23:59.862740 systemd[1]: Stopped target timers.target. Feb 12 20:23:59.863713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:23:59.864385 systemd[1]: Stopped dracut-pre-pivot.service. 
Feb 12 20:23:59.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.865544 systemd[1]: Stopped target initrd.target. Feb 12 20:23:59.869263 kernel: audit: type=1131 audit(1707769439.864:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.869829 systemd[1]: Stopped target basic.target. Feb 12 20:23:59.870367 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:23:59.871308 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:23:59.872306 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:23:59.873226 systemd[1]: Stopped target remote-fs.target. Feb 12 20:23:59.874130 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:23:59.875030 systemd[1]: Stopped target sysinit.target. Feb 12 20:23:59.875894 systemd[1]: Stopped target local-fs.target. Feb 12 20:23:59.876775 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:23:59.877648 systemd[1]: Stopped target swap.target. Feb 12 20:23:59.878487 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:23:59.883115 kernel: audit: type=1131 audit(1707769439.878:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.878610 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:23:59.879460 systemd[1]: Stopped target cryptsetup.target. 
Feb 12 20:23:59.888192 kernel: audit: type=1131 audit(1707769439.883:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.883564 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:23:59.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.883665 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:23:59.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.884619 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:23:59.884735 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:23:59.888749 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:23:59.895417 iscsid[636]: iscsid shutting down. Feb 12 20:23:59.888848 systemd[1]: Stopped ignition-files.service. Feb 12 20:23:59.890500 systemd[1]: Stopping ignition-mount.service... Feb 12 20:23:59.891375 systemd[1]: Stopping iscsid.service... Feb 12 20:23:59.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 20:23:59.899266 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:23:59.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.906011 ignition[815]: INFO : Ignition 2.14.0 Feb 12 20:23:59.906011 ignition[815]: INFO : Stage: umount Feb 12 20:23:59.906011 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:23:59.906011 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:23:59.906011 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:23:59.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.899698 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:23:59.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:59.915859 ignition[815]: INFO : umount: umount passed Feb 12 20:23:59.915859 ignition[815]: INFO : Ignition finished successfully Feb 12 20:23:59.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.899844 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:23:59.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.900524 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:23:59.900672 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:23:59.903156 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:23:59.903243 systemd[1]: Stopped iscsid.service. Feb 12 20:23:59.905464 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:23:59.905539 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:23:59.906916 systemd[1]: Stopping iscsiuio.service... Feb 12 20:23:59.911821 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:23:59.911898 systemd[1]: Stopped iscsiuio.service. Feb 12 20:23:59.913244 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:23:59.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.913320 systemd[1]: Stopped ignition-mount.service. Feb 12 20:23:59.914106 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 12 20:23:59.914155 systemd[1]: Stopped ignition-disks.service. Feb 12 20:23:59.915215 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:23:59.915251 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:23:59.916155 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 20:23:59.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.916190 systemd[1]: Stopped ignition-fetch.service. Feb 12 20:23:59.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.917471 systemd[1]: Stopped target network.target. Feb 12 20:23:59.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.917867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:23:59.917905 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:23:59.919036 systemd[1]: Stopped target paths.target. Feb 12 20:23:59.920078 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:23:59.924041 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:23:59.924469 systemd[1]: Stopped target slices.target. Feb 12 20:23:59.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.924868 systemd[1]: Stopped target sockets.target. 
Feb 12 20:23:59.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.925300 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:23:59.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.925331 systemd[1]: Closed iscsid.socket. Feb 12 20:23:59.925730 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:23:59.925760 systemd[1]: Closed iscsiuio.socket. Feb 12 20:23:59.926220 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:23:59.926268 systemd[1]: Stopped ignition-setup.service. Feb 12 20:23:59.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.927189 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:23:59.928348 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:23:59.950000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:23:59.930302 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:23:59.930787 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:23:59.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.930873 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:23:59.931934 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:23:59.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:23:59.931973 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:23:59.932115 systemd-networkd[631]: eth0: DHCPv6 lease lost Feb 12 20:23:59.959000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:23:59.933779 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:23:59.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.933862 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:23:59.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.936348 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:23:59.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.936383 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:23:59.938525 systemd[1]: Stopping network-cleanup.service... Feb 12 20:23:59.941571 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:23:59.941710 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:23:59.942483 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:23:59.942530 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:23:59.943643 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:23:59.943682 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:23:59.944907 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:23:59.946970 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:23:59.947479 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 12 20:23:59.947579 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:23:59.952610 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:23:59.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.952758 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:23:59.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.954564 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:23:59.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.954674 systemd[1]: Stopped network-cleanup.service. Feb 12 20:23:59.955698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:23:59.955757 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:23:59.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:59.956565 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:23:59.956607 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:23:59.961190 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:23:59.961239 systemd[1]: Stopped dracut-pre-udev.service. 
Feb 12 20:23:59.962227 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 20:23:59.962276 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:23:59.963171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:23:59.963205 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:23:59.964843 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:23:59.971167 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 20:23:59.971227 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 20:23:59.972347 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:23:59.972386 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:23:59.973050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:23:59.973087 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:23:59.975047 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:23:59.975468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:23:59.975551 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:23:59.976233 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:23:59.977627 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:23:59.996165 systemd[1]: Switching root. Feb 12 20:24:00.013846 systemd-journald[185]: Journal stopped Feb 12 20:24:04.531117 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Feb 12 20:24:04.531179 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:24:04.531196 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 12 20:24:04.531208 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:24:04.531222 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:24:04.531233 kernel: SELinux: policy capability open_perms=1 Feb 12 20:24:04.531244 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:24:04.531255 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:24:04.531273 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:24:04.531284 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:24:04.531294 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:24:04.531305 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:24:04.531321 systemd[1]: Successfully loaded SELinux policy in 98.568ms. Feb 12 20:24:04.531339 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.259ms. Feb 12 20:24:04.531353 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:24:04.531365 systemd[1]: Detected virtualization kvm. Feb 12 20:24:04.531376 systemd[1]: Detected architecture x86-64. Feb 12 20:24:04.531388 systemd[1]: Detected first boot. Feb 12 20:24:04.531399 systemd[1]: Hostname set to . Feb 12 20:24:04.531415 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:24:04.531427 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:24:04.531442 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:24:04.531454 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 20:24:04.531466 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:04.531479 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:04.531491 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:24:04.531504 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:24:04.531515 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:24:04.531527 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:24:04.531538 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:24:04.531550 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 20:24:04.531562 systemd[1]: Created slice system-getty.slice. Feb 12 20:24:04.531573 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:24:04.531584 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:24:04.531596 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:24:04.531610 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:24:04.531621 systemd[1]: Created slice user.slice. Feb 12 20:24:04.531632 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:24:04.531644 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:24:04.531655 systemd[1]: Set up automount boot.automount. Feb 12 20:24:04.531667 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:24:04.531680 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 20:24:04.531691 systemd[1]: Stopped target initrd-fs.target. Feb 12 20:24:04.531703 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 20:24:04.531714 systemd[1]: Reached target integritysetup.target. 
Feb 12 20:24:04.531766 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:24:04.531779 systemd[1]: Reached target remote-fs.target.
Feb 12 20:24:04.531791 systemd[1]: Reached target slices.target.
Feb 12 20:24:04.531808 systemd[1]: Reached target swap.target.
Feb 12 20:24:04.531820 systemd[1]: Reached target torcx.target.
Feb 12 20:24:04.531834 systemd[1]: Reached target veritysetup.target.
Feb 12 20:24:04.531845 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 20:24:04.531856 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 20:24:04.531868 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:24:04.531879 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:24:04.531891 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:24:04.531902 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 20:24:04.531913 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 20:24:04.531925 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 20:24:04.531936 systemd[1]: Mounting media.mount...
Feb 12 20:24:04.531950 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:24:04.531961 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 20:24:04.531973 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 20:24:04.532001 systemd[1]: Mounting tmp.mount...
Feb 12 20:24:04.532015 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 20:24:04.532027 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 20:24:04.532040 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:24:04.532053 systemd[1]: Starting modprobe@configfs.service...
Feb 12 20:24:04.532065 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 20:24:04.532082 systemd[1]: Starting modprobe@drm.service...
Feb 12 20:24:04.532094 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 20:24:04.532106 systemd[1]: Starting modprobe@fuse.service...
Feb 12 20:24:04.532119 systemd[1]: Starting modprobe@loop.service...
Feb 12 20:24:04.532132 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 20:24:04.532145 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 20:24:04.532157 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 20:24:04.532170 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 20:24:04.532185 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 20:24:04.532197 systemd[1]: Stopped systemd-journald.service.
Feb 12 20:24:04.532210 systemd[1]: Starting systemd-journald.service...
Feb 12 20:24:04.532223 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:24:04.532235 systemd[1]: Starting systemd-network-generator.service...
Feb 12 20:24:04.532248 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 20:24:04.532262 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:24:04.532274 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 20:24:04.532286 systemd[1]: Stopped verity-setup.service.
Feb 12 20:24:04.532299 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:24:04.532313 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 20:24:04.532325 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 20:24:04.532337 systemd[1]: Mounted media.mount.
Feb 12 20:24:04.532351 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 20:24:04.532363 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 20:24:04.532375 kernel: loop: module loaded
Feb 12 20:24:04.532388 systemd[1]: Mounted tmp.mount.
Feb 12 20:24:04.532401 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:24:04.532413 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 20:24:04.532427 systemd[1]: Finished modprobe@configfs.service.
Feb 12 20:24:04.532440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 20:24:04.532452 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 20:24:04.532464 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 20:24:04.532481 systemd-journald[918]: Journal started
Feb 12 20:24:04.532528 systemd-journald[918]: Runtime Journal (/run/log/journal/7651be1214464185ad865b3920c07bf0) is 4.9M, max 39.5M, 34.5M free.
Feb 12 20:24:00.347000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 20:24:00.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:24:00.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:24:00.479000 audit: BPF prog-id=10 op=LOAD
Feb 12 20:24:00.479000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 20:24:00.479000 audit: BPF prog-id=11 op=LOAD
Feb 12 20:24:00.479000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 20:24:00.650000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 20:24:00.650000 audit[848]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d89c a1=c0000cede0 a2=c0000d7ac0 a3=32 items=0 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:00.650000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:24:00.654000 audit[848]: AVC avc: denied { associate } for pid=848 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 20:24:00.654000 audit[848]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d975 a2=1ed a3=0 items=2 ppid=831 pid=848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:00.654000 audit: CWD cwd="/"
Feb 12 20:24:04.538385 systemd[1]: Finished modprobe@drm.service.
Feb 12 20:24:04.538421 systemd[1]: Started systemd-journald.service.
Feb 12 20:24:00.654000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:00.654000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:00.654000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:24:04.298000 audit: BPF prog-id=12 op=LOAD
Feb 12 20:24:04.298000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:24:04.298000 audit: BPF prog-id=13 op=LOAD
Feb 12 20:24:04.298000 audit: BPF prog-id=14 op=LOAD
Feb 12 20:24:04.298000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:24:04.298000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:24:04.299000 audit: BPF prog-id=15 op=LOAD
Feb 12 20:24:04.299000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 20:24:04.299000 audit: BPF prog-id=16 op=LOAD
Feb 12 20:24:04.300000 audit: BPF prog-id=17 op=LOAD
Feb 12 20:24:04.300000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 20:24:04.300000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 20:24:04.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.310000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 20:24:04.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.469000 audit: BPF prog-id=18 op=LOAD
Feb 12 20:24:04.470000 audit: BPF prog-id=19 op=LOAD
Feb 12 20:24:04.470000 audit: BPF prog-id=20 op=LOAD
Feb 12 20:24:04.470000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 20:24:04.470000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 20:24:04.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.527000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 20:24:04.527000 audit[918]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffcd3d6d530 a2=4000 a3=7ffcd3d6d5cc items=0 ppid=1 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:04.527000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 20:24:04.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:00.635716 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:24:04.297623 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:24:00.640580 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:24:04.297636 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 20:24:00.640614 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:24:04.301712 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 20:24:00.640682 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 20:24:04.537427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 20:24:00.640695 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 20:24:04.537560 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 20:24:00.640732 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 20:24:04.538236 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 20:24:00.640748 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 20:24:04.538351 systemd[1]: Finished modprobe@loop.service.
Feb 12 20:24:00.641056 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 20:24:04.539010 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:24:00.641112 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:24:04.539788 systemd[1]: Finished systemd-network-generator.service.
Feb 12 20:24:00.641137 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:24:04.540556 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 20:24:00.645293 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 20:24:04.541381 systemd[1]: Reached target network-pre.target.
Feb 12 20:24:00.645338 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 20:24:04.543079 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 20:24:00.645361 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 20:24:04.546256 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 20:24:00.645380 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 20:24:04.550298 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 20:24:00.645402 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 20:24:00.645420 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:00Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 20:24:04.577756 kernel: fuse: init (API version 7.34)
Feb 12 20:24:04.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.578024 systemd-journald[918]: Time spent on flushing to /var/log/journal/7651be1214464185ad865b3920c07bf0 is 43.335ms for 1125 entries.
Feb 12 20:24:04.578024 systemd-journald[918]: System Journal (/var/log/journal/7651be1214464185ad865b3920c07bf0) is 8.0M, max 584.8M, 576.8M free.
Feb 12 20:24:04.634650 systemd-journald[918]: Received client request to flush runtime journal.
Feb 12 20:24:04.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:03.901506 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:24:04.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.552029 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 20:24:03.901782 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:24:04.552566 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 20:24:03.901897 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:24:04.553639 systemd[1]: Starting systemd-random-seed.service...
Feb 12 20:24:03.902131 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:24:04.554362 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 20:24:03.902191 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 20:24:04.555546 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:24:03.902267 /usr/lib/systemd/system-generators/torcx-generator[848]: time="2024-02-12T20:24:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 20:24:04.558621 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 20:24:04.571951 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 20:24:04.572148 systemd[1]: Finished modprobe@fuse.service.
Feb 12 20:24:04.574302 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 20:24:04.575789 systemd[1]: Finished systemd-random-seed.service.
Feb 12 20:24:04.576353 systemd[1]: Reached target first-boot-complete.target.
Feb 12 20:24:04.578124 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 20:24:04.597666 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:24:04.626622 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 20:24:04.628525 systemd[1]: Starting systemd-sysusers.service...
Feb 12 20:24:04.635659 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 20:24:04.643673 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:24:04.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.645490 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 20:24:04.655496 udevadm[959]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 20:24:04.678839 systemd[1]: Finished systemd-sysusers.service.
Feb 12 20:24:04.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:04.680559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:24:04.718328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:24:04.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:05.665743 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 20:24:05.678080 kernel: kauditd_printk_skb: 102 callbacks suppressed
Feb 12 20:24:05.678904 kernel: audit: type=1130 audit(1707769445.666:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:05.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:05.677000 audit: BPF prog-id=21 op=LOAD
Feb 12 20:24:05.680464 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:24:05.677000 audit: BPF prog-id=22 op=LOAD
Feb 12 20:24:05.677000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:24:05.677000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:24:05.685582 kernel: audit: type=1334 audit(1707769445.677:142): prog-id=21 op=LOAD
Feb 12 20:24:05.685656 kernel: audit: type=1334 audit(1707769445.677:143): prog-id=22 op=LOAD
Feb 12 20:24:05.685692 kernel: audit: type=1334 audit(1707769445.677:144): prog-id=7 op=UNLOAD
Feb 12 20:24:05.685725 kernel: audit: type=1334 audit(1707769445.677:145): prog-id=8 op=UNLOAD
Feb 12 20:24:05.730791 systemd-udevd[962]: Using default interface naming scheme 'v252'.
Feb 12 20:24:05.774152 systemd[1]: Started systemd-udevd.service.
Feb 12 20:24:05.790975 kernel: audit: type=1130 audit(1707769445.778:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:05.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:05.798279 kernel: audit: type=1334 audit(1707769445.792:147): prog-id=23 op=LOAD
Feb 12 20:24:05.792000 audit: BPF prog-id=23 op=LOAD
Feb 12 20:24:05.799359 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:24:05.824651 kernel: audit: type=1334 audit(1707769445.810:148): prog-id=24 op=LOAD
Feb 12 20:24:05.826198 kernel: audit: type=1334 audit(1707769445.813:149): prog-id=25 op=LOAD
Feb 12 20:24:05.826286 kernel: audit: type=1334 audit(1707769445.813:150): prog-id=26 op=LOAD
Feb 12 20:24:05.810000 audit: BPF prog-id=24 op=LOAD
Feb 12 20:24:05.813000 audit: BPF prog-id=25 op=LOAD
Feb 12 20:24:05.813000 audit: BPF prog-id=26 op=LOAD
Feb 12 20:24:05.819500 systemd[1]: Starting systemd-userdbd.service...
Feb 12 20:24:05.841407 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 20:24:05.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:05.865641 systemd[1]: Started systemd-userdbd.service.
Feb 12 20:24:05.928000 audit[968]: AVC avc: denied { confidentiality } for pid=968 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 20:24:05.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:24:05.956310 systemd-networkd[978]: lo: Link UP
Feb 12 20:24:05.956320 systemd-networkd[978]: lo: Gained carrier
Feb 12 20:24:05.956846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:24:05.957246 systemd-networkd[978]: Enumeration completed
Feb 12 20:24:05.957363 systemd-networkd[978]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:24:05.957524 systemd[1]: Started systemd-networkd.service.
Feb 12 20:24:05.958899 systemd-networkd[978]: eth0: Link UP
Feb 12 20:24:05.958914 systemd-networkd[978]: eth0: Gained carrier
Feb 12 20:24:05.964051 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 12 20:24:05.928000 audit[968]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a272f3e210 a1=32194 a2=7f51e2f11bc5 a3=5 items=108 ppid=962 pid=968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:05.928000 audit: CWD cwd="/"
Feb 12 20:24:05.928000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=1 name=(null) inode=13884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=2 name=(null) inode=13884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=3 name=(null) inode=13885 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=4 name=(null) inode=13884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=5 name=(null) inode=13886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=6 name=(null) inode=13884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=7 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=8 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=9 name=(null) inode=13888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=10 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=11 name=(null) inode=13889 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=12 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=13 name=(null) inode=13890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=14 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=15 name=(null) inode=13891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=16 name=(null) inode=13887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=17 name=(null) inode=13892 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=18 name=(null) inode=13884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=19 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=20 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=21 name=(null) inode=13894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=22 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=23 name=(null) inode=13895 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=24 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=25 name=(null) inode=13896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=26 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=27 name=(null) inode=13897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=28 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:05.928000 audit: PATH item=29 name=(null) inode=13898 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=30 name=(null) inode=13884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.971459 systemd-networkd[978]: eth0: DHCPv4 address 172.24.4.211/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:24:05.928000 audit: PATH item=31 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=32 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=33 name=(null) inode=13900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=34 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=35 name=(null) inode=13901 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=36 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=37 name=(null) inode=13902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: 
PATH item=38 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=39 name=(null) inode=13903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=40 name=(null) inode=13899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=41 name=(null) inode=13904 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=42 name=(null) inode=13884 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=43 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=44 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=45 name=(null) inode=13906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=46 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=47 name=(null) inode=13907 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=48 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=49 name=(null) inode=13908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=50 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=51 name=(null) inode=13909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=52 name=(null) inode=13905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=53 name=(null) inode=13910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=55 name=(null) inode=13911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=56 name=(null) inode=13911 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=57 name=(null) inode=13912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=58 name=(null) inode=13911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=59 name=(null) inode=13913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=60 name=(null) inode=13911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=61 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=62 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=63 name=(null) inode=13915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=64 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=65 name=(null) inode=13916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=66 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=67 name=(null) inode=13917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=68 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=69 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=70 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=71 name=(null) inode=13919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=72 name=(null) inode=13911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=73 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=74 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=75 name=(null) inode=13921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=76 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=77 name=(null) inode=13922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=78 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=79 name=(null) inode=13923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=80 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=81 name=(null) inode=13924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.980026 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:24:05.928000 audit: PATH item=82 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=83 name=(null) inode=13925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=84 name=(null) inode=13911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=85 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=86 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=87 name=(null) inode=13927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=88 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=89 name=(null) inode=13928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=90 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=91 name=(null) inode=13929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=92 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=93 name=(null) inode=13930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=94 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=95 name=(null) inode=13931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=96 name=(null) inode=13911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=97 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=98 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=99 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=100 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=101 name=(null) inode=13934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=102 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=103 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=104 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=105 name=(null) inode=13936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=106 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PATH item=107 name=(null) inode=13937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:24:05.928000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:24:05.997008 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:24:06.024037 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:24:06.034018 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:24:06.074504 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:24:06.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:24:06.076836 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:24:06.148041 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:24:06.196096 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:24:06.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.197522 systemd[1]: Reached target cryptsetup.target. Feb 12 20:24:06.201237 systemd[1]: Starting lvm2-activation.service... Feb 12 20:24:06.210282 lvm[992]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:24:06.249225 systemd[1]: Finished lvm2-activation.service. Feb 12 20:24:06.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.250582 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:24:06.251748 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:24:06.251812 systemd[1]: Reached target local-fs.target. Feb 12 20:24:06.252917 systemd[1]: Reached target machines.target. Feb 12 20:24:06.256706 systemd[1]: Starting ldconfig.service... Feb 12 20:24:06.259104 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:24:06.259215 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:06.261676 systemd[1]: Starting systemd-boot-update.service... 
Feb 12 20:24:06.266311 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:24:06.273257 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:24:06.274657 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:24:06.274776 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:24:06.279311 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:24:06.305290 systemd[1]: boot.automount: Got automount request for /boot, triggered by 994 (bootctl) Feb 12 20:24:06.308062 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:24:06.337497 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:24:06.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.371691 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:24:06.415670 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:24:06.416949 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:24:06.490047 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:24:06.491627 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:24:06.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:06.658029 systemd-fsck[1004]: fsck.fat 4.2 (2021-01-31) Feb 12 20:24:06.658029 systemd-fsck[1004]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:24:06.662174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:24:06.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.666687 systemd[1]: Mounting boot.mount... Feb 12 20:24:06.693259 systemd[1]: Mounted boot.mount. Feb 12 20:24:06.739061 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:24:06.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.857682 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:24:06.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.861648 systemd[1]: Starting audit-rules.service... Feb 12 20:24:06.866158 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:24:06.871209 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:24:06.880000 audit: BPF prog-id=27 op=LOAD Feb 12 20:24:06.886000 audit: BPF prog-id=28 op=LOAD Feb 12 20:24:06.882764 systemd[1]: Starting systemd-resolved.service... Feb 12 20:24:06.891423 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:24:06.897388 systemd[1]: Starting systemd-update-utmp.service... 
Feb 12 20:24:06.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.902548 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:24:06.905433 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:24:06.916000 audit[1018]: SYSTEM_BOOT pid=1018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.920517 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:24:06.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:06.937329 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 12 20:24:06.975000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:24:06.975000 audit[1027]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffee86568c0 a2=420 a3=0 items=0 ppid=1007 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:06.975000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:24:06.976872 augenrules[1027]: No rules Feb 12 20:24:06.977958 systemd[1]: Finished audit-rules.service. Feb 12 20:24:06.987379 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:24:06.988015 systemd[1]: Reached target time-set.target. Feb 12 20:24:07.442597 systemd-timesyncd[1014]: Contacted time server 45.128.41.10:123 (0.flatcar.pool.ntp.org). Feb 12 20:24:07.443041 systemd-timesyncd[1014]: Initial clock synchronization to Mon 2024-02-12 20:24:07.442404 UTC. Feb 12 20:24:07.456948 systemd-resolved[1011]: Positive Trust Anchors: Feb 12 20:24:07.456970 systemd-resolved[1011]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:24:07.457022 systemd-resolved[1011]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:24:07.467712 systemd-resolved[1011]: Using system hostname 'ci-3510-3-2-4-c19eb846e8.novalocal'. Feb 12 20:24:07.469545 systemd[1]: Started systemd-resolved.service. 
Feb 12 20:24:07.471425 systemd[1]: Reached target network.target. Feb 12 20:24:07.472709 systemd[1]: Reached target nss-lookup.target. Feb 12 20:24:07.603611 ldconfig[993]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:24:07.616868 systemd[1]: Finished ldconfig.service. Feb 12 20:24:07.620992 systemd[1]: Starting systemd-update-done.service... Feb 12 20:24:07.634817 systemd[1]: Finished systemd-update-done.service. Feb 12 20:24:07.636285 systemd[1]: Reached target sysinit.target. Feb 12 20:24:07.637594 systemd[1]: Started motdgen.path. Feb 12 20:24:07.638680 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:24:07.640503 systemd[1]: Started logrotate.timer. Feb 12 20:24:07.641767 systemd[1]: Started mdadm.timer. Feb 12 20:24:07.642794 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:24:07.643907 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:24:07.643981 systemd[1]: Reached target paths.target. Feb 12 20:24:07.654253 systemd[1]: Reached target timers.target. Feb 12 20:24:07.655955 systemd[1]: Listening on dbus.socket. Feb 12 20:24:07.659369 systemd[1]: Starting docker.socket... Feb 12 20:24:07.666860 systemd[1]: Listening on sshd.socket. Feb 12 20:24:07.668241 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:24:07.669166 systemd[1]: Listening on docker.socket. Feb 12 20:24:07.670464 systemd[1]: Reached target sockets.target. Feb 12 20:24:07.671563 systemd[1]: Reached target basic.target. Feb 12 20:24:07.672737 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Feb 12 20:24:07.672808 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 20:24:07.675058 systemd[1]: Starting containerd.service...
Feb 12 20:24:07.678326 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 12 20:24:07.681804 systemd[1]: Starting dbus.service...
Feb 12 20:24:07.688023 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 20:24:07.700466 systemd[1]: Starting extend-filesystems.service...
Feb 12 20:24:07.703395 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 20:24:07.706742 systemd[1]: Starting motdgen.service...
Feb 12 20:24:07.714442 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 20:24:07.719357 systemd[1]: Starting prepare-critools.service...
Feb 12 20:24:07.724319 systemd[1]: Starting prepare-helm.service...
Feb 12 20:24:07.726153 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 20:24:07.727848 systemd[1]: Starting sshd-keygen.service...
Feb 12 20:24:07.735407 systemd[1]: Starting systemd-logind.service...
Feb 12 20:24:07.735953 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 20:24:07.736020 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 20:24:07.736539 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 12 20:24:07.737288 systemd[1]: Starting update-engine.service...
Feb 12 20:24:07.738656 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 20:24:07.754152 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 20:24:07.754357 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 20:24:07.758224 jq[1053]: true
Feb 12 20:24:07.772881 jq[1041]: false
Feb 12 20:24:07.777154 tar[1056]: ./
Feb 12 20:24:07.777154 tar[1056]: ./loopback
Feb 12 20:24:07.778254 tar[1058]: crictl
Feb 12 20:24:07.785371 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 20:24:07.785575 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 20:24:07.788984 jq[1060]: true
Feb 12 20:24:07.789233 tar[1059]: linux-amd64/helm
Feb 12 20:24:07.828014 extend-filesystems[1042]: Found vda
Feb 12 20:24:07.829567 extend-filesystems[1042]: Found vda1
Feb 12 20:24:07.830416 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 20:24:07.830612 systemd[1]: Finished motdgen.service.
Feb 12 20:24:07.832263 extend-filesystems[1042]: Found vda2
Feb 12 20:24:07.832939 extend-filesystems[1042]: Found vda3
Feb 12 20:24:07.833829 dbus-daemon[1038]: [system] SELinux support is enabled
Feb 12 20:24:07.833945 systemd[1]: Started dbus.service.
Feb 12 20:24:07.834342 extend-filesystems[1042]: Found usr
Feb 12 20:24:07.835426 extend-filesystems[1042]: Found vda4
Feb 12 20:24:07.835941 extend-filesystems[1042]: Found vda6
Feb 12 20:24:07.836385 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 20:24:07.836411 systemd[1]: Reached target system-config.target.
Feb 12 20:24:07.836792 extend-filesystems[1042]: Found vda7
Feb 12 20:24:07.836792 extend-filesystems[1042]: Found vda9
Feb 12 20:24:07.836792 extend-filesystems[1042]: Checking size of /dev/vda9
Feb 12 20:24:07.837025 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 20:24:07.837041 systemd[1]: Reached target user-config.target.
Feb 12 20:24:07.899456 env[1061]: time="2024-02-12T20:24:07.899393907Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 20:24:07.915935 bash[1092]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:24:07.916956 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 20:24:07.928935 extend-filesystems[1042]: Resized partition /dev/vda9
Feb 12 20:24:07.943481 extend-filesystems[1099]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 20:24:07.953670 systemd-logind[1050]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 12 20:24:07.953699 systemd-logind[1050]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 20:24:07.955556 update_engine[1052]: I0212 20:24:07.954459 1052 main.cc:92] Flatcar Update Engine starting
Feb 12 20:24:07.956247 systemd-logind[1050]: New seat seat0.
Feb 12 20:24:07.962153 systemd[1]: Started systemd-logind.service.
Feb 12 20:24:07.965097 systemd[1]: Started update-engine.service.
Feb 12 20:24:07.966886 update_engine[1052]: I0212 20:24:07.966755 1052 update_check_scheduler.cc:74] Next update check in 4m6s
Feb 12 20:24:07.967615 systemd[1]: Started locksmithd.service.
Feb 12 20:24:07.981244 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Feb 12 20:24:08.015561 systemd-networkd[978]: eth0: Gained IPv6LL
Feb 12 20:24:08.028610 env[1061]: time="2024-02-12T20:24:08.027764897Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 20:24:08.028610 env[1061]: time="2024-02-12T20:24:08.027956957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:24:08.048425 env[1061]: time="2024-02-12T20:24:08.048369210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:24:08.048425 env[1061]: time="2024-02-12T20:24:08.048413594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:24:08.048711 env[1061]: time="2024-02-12T20:24:08.048678911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:24:08.048711 env[1061]: time="2024-02-12T20:24:08.048705141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 20:24:08.048788 env[1061]: time="2024-02-12T20:24:08.048725629Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 20:24:08.048788 env[1061]: time="2024-02-12T20:24:08.048738934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 20:24:08.048838 env[1061]: time="2024-02-12T20:24:08.048825697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:24:08.049090 env[1061]: time="2024-02-12T20:24:08.049062992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:24:08.049234 env[1061]: time="2024-02-12T20:24:08.049190361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:24:08.049234 env[1061]: time="2024-02-12T20:24:08.049231327Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 20:24:08.049304 env[1061]: time="2024-02-12T20:24:08.049285088Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 20:24:08.049304 env[1061]: time="2024-02-12T20:24:08.049300026Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 20:24:08.061099 coreos-metadata[1037]: Feb 12 20:24:08.060 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Feb 12 20:24:08.090638 tar[1056]: ./bandwidth
Feb 12 20:24:08.183562 tar[1056]: ./ptp
Feb 12 20:24:08.293232 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Feb 12 20:24:08.369428 tar[1056]: ./vlan
Feb 12 20:24:08.373034 extend-filesystems[1099]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 12 20:24:08.373034 extend-filesystems[1099]: old_desc_blocks = 1, new_desc_blocks = 3
Feb 12 20:24:08.373034 extend-filesystems[1099]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Feb 12 20:24:08.376793 extend-filesystems[1042]: Resized filesystem in /dev/vda9
Feb 12 20:24:08.377474 env[1061]: time="2024-02-12T20:24:08.374363793Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 20:24:08.377474 env[1061]: time="2024-02-12T20:24:08.374465785Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 20:24:08.377474 env[1061]: time="2024-02-12T20:24:08.374505008Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 20:24:08.373824 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 20:24:08.374242 systemd[1]: Finished extend-filesystems.service.
Feb 12 20:24:08.378525 env[1061]: time="2024-02-12T20:24:08.378332517Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.378589 env[1061]: time="2024-02-12T20:24:08.378562237Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.378663 env[1061]: time="2024-02-12T20:24:08.378608364Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.378663 env[1061]: time="2024-02-12T20:24:08.378644842Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.378753 env[1061]: time="2024-02-12T20:24:08.378683304Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.378753 env[1061]: time="2024-02-12T20:24:08.378719663Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.378832 env[1061]: time="2024-02-12T20:24:08.378754528Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.378832 env[1061]: time="2024-02-12T20:24:08.378789013Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.378911 env[1061]: time="2024-02-12T20:24:08.378823337Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 20:24:08.379243 env[1061]: time="2024-02-12T20:24:08.379171360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 20:24:08.379505 env[1061]: time="2024-02-12T20:24:08.379453299Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 20:24:08.380719 env[1061]: time="2024-02-12T20:24:08.380671704Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 20:24:08.380777 env[1061]: time="2024-02-12T20:24:08.380750081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.380808 env[1061]: time="2024-02-12T20:24:08.380788833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 20:24:08.380961 env[1061]: time="2024-02-12T20:24:08.380920210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381126 env[1061]: time="2024-02-12T20:24:08.381084257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381162 env[1061]: time="2024-02-12T20:24:08.381138018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381191 env[1061]: time="2024-02-12T20:24:08.381171130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381306 env[1061]: time="2024-02-12T20:24:08.381267601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381342 env[1061]: time="2024-02-12T20:24:08.381321032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381371 env[1061]: time="2024-02-12T20:24:08.381353753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381420 env[1061]: time="2024-02-12T20:24:08.381387496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381461 env[1061]: time="2024-02-12T20:24:08.381435406Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 20:24:08.381830 env[1061]: time="2024-02-12T20:24:08.381785332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381875 env[1061]: time="2024-02-12T20:24:08.381844032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381903 env[1061]: time="2024-02-12T20:24:08.381880751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.381934 env[1061]: time="2024-02-12T20:24:08.381914404Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 20:24:08.382890 env[1061]: time="2024-02-12T20:24:08.381969929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 20:24:08.382890 env[1061]: time="2024-02-12T20:24:08.382012769Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 20:24:08.382890 env[1061]: time="2024-02-12T20:24:08.382061911Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 20:24:08.382890 env[1061]: time="2024-02-12T20:24:08.382143314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 20:24:08.383043 env[1061]: time="2024-02-12T20:24:08.382671745Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 20:24:08.383043 env[1061]: time="2024-02-12T20:24:08.382830603Z" level=info msg="Connect containerd service"
Feb 12 20:24:08.383043 env[1061]: time="2024-02-12T20:24:08.382899221Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 20:24:08.400460 env[1061]: time="2024-02-12T20:24:08.398651585Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:24:08.400460 env[1061]: time="2024-02-12T20:24:08.399038020Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 20:24:08.400460 env[1061]: time="2024-02-12T20:24:08.399083715Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 20:24:08.400460 env[1061]: time="2024-02-12T20:24:08.399149439Z" level=info msg="containerd successfully booted in 0.502898s"
Feb 12 20:24:08.399379 systemd[1]: Started containerd.service.
Feb 12 20:24:08.401404 env[1061]: time="2024-02-12T20:24:08.401366517Z" level=info msg="Start subscribing containerd event"
Feb 12 20:24:08.401502 env[1061]: time="2024-02-12T20:24:08.401486271Z" level=info msg="Start recovering state"
Feb 12 20:24:08.401620 env[1061]: time="2024-02-12T20:24:08.401606427Z" level=info msg="Start event monitor"
Feb 12 20:24:08.401690 env[1061]: time="2024-02-12T20:24:08.401677009Z" level=info msg="Start snapshots syncer"
Feb 12 20:24:08.401752 env[1061]: time="2024-02-12T20:24:08.401738535Z" level=info msg="Start cni network conf syncer for default"
Feb 12 20:24:08.401815 env[1061]: time="2024-02-12T20:24:08.401801853Z" level=info msg="Start streaming server"
Feb 12 20:24:08.519926 tar[1056]: ./host-device
Feb 12 20:24:08.578530 coreos-metadata[1037]: Feb 12 20:24:08.578 INFO Fetch successful
Feb 12 20:24:08.578530 coreos-metadata[1037]: Feb 12 20:24:08.578 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 12 20:24:08.595799 coreos-metadata[1037]: Feb 12 20:24:08.595 INFO Fetch successful
Feb 12 20:24:08.600260 unknown[1037]: wrote ssh authorized keys file for user: core
Feb 12 20:24:08.636000 update-ssh-keys[1108]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:24:08.636378 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 12 20:24:08.642909 tar[1056]: ./tuning
Feb 12 20:24:08.715357 tar[1056]: ./vrf
Feb 12 20:24:08.790991 tar[1056]: ./sbr
Feb 12 20:24:08.873743 tar[1056]: ./tap
Feb 12 20:24:08.912797 tar[1059]: linux-amd64/LICENSE
Feb 12 20:24:08.912797 tar[1059]: linux-amd64/README.md
Feb 12 20:24:08.927702 systemd[1]: Finished prepare-helm.service.
Feb 12 20:24:08.951167 tar[1056]: ./dhcp
Feb 12 20:24:09.005753 systemd[1]: Finished prepare-critools.service.
Feb 12 20:24:09.076492 sshd_keygen[1074]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 20:24:09.090555 tar[1056]: ./static
Feb 12 20:24:09.108897 systemd[1]: Finished sshd-keygen.service.
Feb 12 20:24:09.111286 systemd[1]: Starting issuegen.service...
Feb 12 20:24:09.118127 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 20:24:09.118343 systemd[1]: Finished issuegen.service.
Feb 12 20:24:09.120554 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 20:24:09.130472 tar[1056]: ./firewall
Feb 12 20:24:09.134805 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 20:24:09.137671 systemd[1]: Started getty@tty1.service.
Feb 12 20:24:09.139877 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 20:24:09.140764 systemd[1]: Reached target getty.target.
Feb 12 20:24:09.176482 tar[1056]: ./macvlan
Feb 12 20:24:09.212100 tar[1056]: ./dummy
Feb 12 20:24:09.247314 tar[1056]: ./bridge
Feb 12 20:24:09.285541 tar[1056]: ./ipvlan
Feb 12 20:24:09.323548 tar[1056]: ./portmap
Feb 12 20:24:09.357338 tar[1056]: ./host-local
Feb 12 20:24:09.367719 locksmithd[1100]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 20:24:09.453901 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 20:24:09.456173 systemd[1]: Reached target multi-user.target.
Feb 12 20:24:09.460957 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 20:24:09.479598 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 20:24:09.479985 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 20:24:09.516447 systemd[1]: Startup finished in 1.025s (kernel) + 1min 49.379s (initrd) + 8.862s (userspace) = 1min 59.267s.
Feb 12 20:24:09.612307 systemd[1]: Created slice system-sshd.slice.
Feb 12 20:24:09.614834 systemd[1]: Started sshd@0-172.24.4.211:22-172.24.4.1:58324.service.
Feb 12 20:24:10.737925 sshd[1128]: Accepted publickey for core from 172.24.4.1 port 58324 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:24:10.742194 sshd[1128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:24:10.768929 systemd[1]: Created slice user-500.slice.
Feb 12 20:24:10.771460 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 20:24:10.781397 systemd-logind[1050]: New session 1 of user core.
Feb 12 20:24:10.797281 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 20:24:10.802175 systemd[1]: Starting user@500.service...
Feb 12 20:24:10.810176 (systemd)[1131]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:24:10.934362 systemd[1131]: Queued start job for default target default.target.
Feb 12 20:24:10.935122 systemd[1131]: Reached target paths.target.
Feb 12 20:24:10.935252 systemd[1131]: Reached target sockets.target.
Feb 12 20:24:10.935352 systemd[1131]: Reached target timers.target.
Feb 12 20:24:10.935444 systemd[1131]: Reached target basic.target.
Feb 12 20:24:10.935564 systemd[1131]: Reached target default.target.
Feb 12 20:24:10.935695 systemd[1131]: Startup finished in 111ms.
Feb 12 20:24:10.936085 systemd[1]: Started user@500.service.
Feb 12 20:24:10.939411 systemd[1]: Started session-1.scope.
Feb 12 20:24:11.328614 systemd[1]: Started sshd@1-172.24.4.211:22-172.24.4.1:59818.service.
Feb 12 20:24:13.013686 sshd[1140]: Accepted publickey for core from 172.24.4.1 port 59818 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:24:13.016918 sshd[1140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:24:13.028202 systemd-logind[1050]: New session 2 of user core.
Feb 12 20:24:13.028649 systemd[1]: Started session-2.scope.
Feb 12 20:24:13.981322 sshd[1140]: pam_unix(sshd:session): session closed for user core
Feb 12 20:24:13.989472 systemd[1]: sshd@1-172.24.4.211:22-172.24.4.1:59818.service: Deactivated successfully.
Feb 12 20:24:13.991464 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 20:24:13.993158 systemd-logind[1050]: Session 2 logged out. Waiting for processes to exit.
Feb 12 20:24:13.996807 systemd[1]: Started sshd@2-172.24.4.211:22-172.24.4.1:59820.service.
Feb 12 20:24:14.000981 systemd-logind[1050]: Removed session 2.
Feb 12 20:24:15.434079 sshd[1146]: Accepted publickey for core from 172.24.4.1 port 59820 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:24:15.436472 sshd[1146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:24:15.448867 systemd[1]: Started session-3.scope.
Feb 12 20:24:15.450544 systemd-logind[1050]: New session 3 of user core.
Feb 12 20:24:16.159581 sshd[1146]: pam_unix(sshd:session): session closed for user core
Feb 12 20:24:16.167967 systemd[1]: Started sshd@3-172.24.4.211:22-172.24.4.1:38422.service.
Feb 12 20:24:16.169195 systemd[1]: sshd@2-172.24.4.211:22-172.24.4.1:59820.service: Deactivated successfully.
Feb 12 20:24:16.173019 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 20:24:16.175465 systemd-logind[1050]: Session 3 logged out. Waiting for processes to exit.
Feb 12 20:24:16.178464 systemd-logind[1050]: Removed session 3.
Feb 12 20:24:17.601984 sshd[1151]: Accepted publickey for core from 172.24.4.1 port 38422 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:24:17.605554 sshd[1151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:24:17.615334 systemd-logind[1050]: New session 4 of user core.
Feb 12 20:24:17.617062 systemd[1]: Started session-4.scope.
Feb 12 20:24:18.420076 sshd[1151]: pam_unix(sshd:session): session closed for user core
Feb 12 20:24:18.428712 systemd[1]: Started sshd@4-172.24.4.211:22-172.24.4.1:38436.service.
Feb 12 20:24:18.431044 systemd[1]: sshd@3-172.24.4.211:22-172.24.4.1:38422.service: Deactivated successfully.
Feb 12 20:24:18.432836 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 20:24:18.436336 systemd-logind[1050]: Session 4 logged out. Waiting for processes to exit.
Feb 12 20:24:18.439801 systemd-logind[1050]: Removed session 4.
Feb 12 20:24:19.701837 sshd[1157]: Accepted publickey for core from 172.24.4.1 port 38436 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:24:19.704852 sshd[1157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:24:19.717326 systemd-logind[1050]: New session 5 of user core.
Feb 12 20:24:19.718358 systemd[1]: Started session-5.scope.
Feb 12 20:24:20.221642 sudo[1162]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 20:24:20.222113 sudo[1162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 20:24:20.888410 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 20:24:20.894920 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 20:24:20.895426 systemd[1]: Reached target network-online.target.
Feb 12 20:24:20.897230 systemd[1]: Starting docker.service...
Feb 12 20:24:20.960894 env[1178]: time="2024-02-12T20:24:20.960821672Z" level=info msg="Starting up"
Feb 12 20:24:20.963155 env[1178]: time="2024-02-12T20:24:20.963082702Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 20:24:20.963155 env[1178]: time="2024-02-12T20:24:20.963103822Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 20:24:20.963155 env[1178]: time="2024-02-12T20:24:20.963123088Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 20:24:20.963155 env[1178]: time="2024-02-12T20:24:20.963135231Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 20:24:20.965327 env[1178]: time="2024-02-12T20:24:20.965182981Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 20:24:20.965327 env[1178]: time="2024-02-12T20:24:20.965232915Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 20:24:20.965327 env[1178]: time="2024-02-12T20:24:20.965254566Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 20:24:20.965327 env[1178]: time="2024-02-12T20:24:20.965268903Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 20:24:20.975114 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2575413900-merged.mount: Deactivated successfully.
Feb 12 20:24:21.010603 env[1178]: time="2024-02-12T20:24:21.010574363Z" level=info msg="Loading containers: start."
Feb 12 20:24:21.371359 kernel: Initializing XFRM netlink socket
Feb 12 20:24:21.447368 env[1178]: time="2024-02-12T20:24:21.447321011Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 20:24:21.549730 systemd-networkd[978]: docker0: Link UP
Feb 12 20:24:21.566154 env[1178]: time="2024-02-12T20:24:21.566103398Z" level=info msg="Loading containers: done."
Feb 12 20:24:21.588228 env[1178]: time="2024-02-12T20:24:21.588115601Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 12 20:24:21.588478 env[1178]: time="2024-02-12T20:24:21.588329032Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 12 20:24:21.588478 env[1178]: time="2024-02-12T20:24:21.588434009Z" level=info msg="Daemon has completed initialization"
Feb 12 20:24:21.613004 systemd[1]: Started docker.service.
Feb 12 20:24:21.623496 env[1178]: time="2024-02-12T20:24:21.623311491Z" level=info msg="API listen on /run/docker.sock"
Feb 12 20:24:21.652182 systemd[1]: Reloading.
Feb 12 20:24:21.772429 /usr/lib/systemd/system-generators/torcx-generator[1317]: time="2024-02-12T20:24:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:24:21.772488 /usr/lib/systemd/system-generators/torcx-generator[1317]: time="2024-02-12T20:24:21Z" level=info msg="torcx already run"
Feb 12 20:24:21.851349 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:24:21.851547 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:24:21.877524 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:24:21.975618 systemd[1]: Started kubelet.service.
Feb 12 20:24:22.073474 kubelet[1362]: E0212 20:24:22.073419 1362 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 12 20:24:22.076431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 20:24:22.076556 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 20:24:23.078802 env[1061]: time="2024-02-12T20:24:23.078693480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\""
Feb 12 20:24:23.919858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035901339.mount: Deactivated successfully.
Feb 12 20:24:26.709842 env[1061]: time="2024-02-12T20:24:26.709633056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:26.714615 env[1061]: time="2024-02-12T20:24:26.714546291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:26.718485 env[1061]: time="2024-02-12T20:24:26.718423843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:26.723731 env[1061]: time="2024-02-12T20:24:26.723662989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\""
Feb 12 20:24:26.726453 env[1061]: time="2024-02-12T20:24:26.721669260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:26.739574 env[1061]: time="2024-02-12T20:24:26.739521311Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\""
Feb 12 20:24:29.688740 env[1061]: time="2024-02-12T20:24:29.688573628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:29.692707 env[1061]: time="2024-02-12T20:24:29.692651236Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:29.697792 env[1061]: time="2024-02-12T20:24:29.697738758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:29.700147 env[1061]: time="2024-02-12T20:24:29.700096099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:29.702514 env[1061]: time="2024-02-12T20:24:29.702436478Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\""
Feb 12 20:24:29.724405 env[1061]: time="2024-02-12T20:24:29.724326202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\""
Feb 12 20:24:31.825023 env[1061]: time="2024-02-12T20:24:31.824762139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:31.828497 env[1061]: time="2024-02-12T20:24:31.828441460Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:31.831359 env[1061]: time="2024-02-12T20:24:31.831311202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:31.840285 env[1061]: time="2024-02-12T20:24:31.840175477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:31.840836 env[1061]: time="2024-02-12T20:24:31.840788998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\""
Feb 12 20:24:31.856553 env[1061]: time="2024-02-12T20:24:31.856482701Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 12 20:24:32.321287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 12 20:24:32.321809 systemd[1]: Stopped kubelet.service.
Feb 12 20:24:32.325541 systemd[1]: Started kubelet.service.
Feb 12 20:24:32.452790 kubelet[1397]: E0212 20:24:32.452629 1397 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 12 20:24:32.459590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 20:24:32.459785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 20:24:33.387412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235847661.mount: Deactivated successfully.
Feb 12 20:24:34.269400 env[1061]: time="2024-02-12T20:24:34.268845016Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:34.278319 env[1061]: time="2024-02-12T20:24:34.278275343Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:34.281456 env[1061]: time="2024-02-12T20:24:34.281380747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:34.283518 env[1061]: time="2024-02-12T20:24:34.283461169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:34.284142 env[1061]: time="2024-02-12T20:24:34.284110156Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\""
Feb 12 20:24:34.310840 env[1061]: time="2024-02-12T20:24:34.310805012Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 12 20:24:34.965166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824578040.mount: Deactivated successfully.
Feb 12 20:24:34.978851 env[1061]: time="2024-02-12T20:24:34.978778085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:34.984145 env[1061]: time="2024-02-12T20:24:34.984094767Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:34.989132 env[1061]: time="2024-02-12T20:24:34.989079776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:34.994098 env[1061]: time="2024-02-12T20:24:34.994047143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:34.997469 env[1061]: time="2024-02-12T20:24:34.995988995Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 12 20:24:35.021083 env[1061]: time="2024-02-12T20:24:35.021023336Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\""
Feb 12 20:24:35.761906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325323042.mount: Deactivated successfully.
Feb 12 20:24:42.570778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 12 20:24:42.571055 systemd[1]: Stopped kubelet.service.
Feb 12 20:24:42.573048 systemd[1]: Started kubelet.service.
Feb 12 20:24:42.711389 kubelet[1414]: E0212 20:24:42.711337 1414 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 12 20:24:42.713774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 20:24:42.713923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 20:24:43.257094 env[1061]: time="2024-02-12T20:24:43.256998852Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:43.262010 env[1061]: time="2024-02-12T20:24:43.261940796Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:43.265465 env[1061]: time="2024-02-12T20:24:43.265419016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:43.270702 env[1061]: time="2024-02-12T20:24:43.270623211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:43.276869 env[1061]: time="2024-02-12T20:24:43.276826696Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\""
Feb 12 20:24:43.297800 env[1061]: time="2024-02-12T20:24:43.297732882Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Feb 12 20:24:43.959649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234609269.mount: Deactivated successfully.
Feb 12 20:24:45.549621 env[1061]: time="2024-02-12T20:24:45.549498852Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:45.553099 env[1061]: time="2024-02-12T20:24:45.552970396Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:45.556838 env[1061]: time="2024-02-12T20:24:45.556757010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:45.559234 env[1061]: time="2024-02-12T20:24:45.559160636Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:45.559911 env[1061]: time="2024-02-12T20:24:45.559871998Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Feb 12 20:24:49.632684 systemd[1]: Stopped kubelet.service.
Feb 12 20:24:49.671774 systemd[1]: Reloading.
Feb 12 20:24:49.794522 /usr/lib/systemd/system-generators/torcx-generator[1512]: time="2024-02-12T20:24:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:24:49.794555 /usr/lib/systemd/system-generators/torcx-generator[1512]: time="2024-02-12T20:24:49Z" level=info msg="torcx already run"
Feb 12 20:24:49.889746 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:24:49.889961 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:24:49.915119 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:24:50.019192 systemd[1]: Started kubelet.service.
Feb 12 20:24:50.079676 kubelet[1559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:24:50.080037 kubelet[1559]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:24:50.080091 kubelet[1559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:24:50.080246 kubelet[1559]: I0212 20:24:50.080192 1559 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 20:24:50.724038 kubelet[1559]: I0212 20:24:50.723974 1559 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 12 20:24:50.724038 kubelet[1559]: I0212 20:24:50.724039 1559 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 20:24:50.724730 kubelet[1559]: I0212 20:24:50.724691 1559 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 12 20:24:50.734064 kubelet[1559]: E0212 20:24:50.734048 1559 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.211:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.734194 kubelet[1559]: I0212 20:24:50.734182 1559 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 20:24:50.739938 kubelet[1559]: I0212 20:24:50.739918 1559 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 20:24:50.740276 kubelet[1559]: I0212 20:24:50.740264 1559 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 20:24:50.740509 kubelet[1559]: I0212 20:24:50.740495 1559 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 12 20:24:50.740640 kubelet[1559]: I0212 20:24:50.740630 1559 topology_manager.go:138] "Creating topology manager with none policy"
Feb 12 20:24:50.740708 kubelet[1559]: I0212 20:24:50.740699 1559 container_manager_linux.go:301] "Creating device plugin manager"
Feb 12 20:24:50.740856 kubelet[1559]: I0212 20:24:50.740845 1559 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:24:50.741001 kubelet[1559]: I0212 20:24:50.740990 1559 kubelet.go:393] "Attempting to sync node with API server"
Feb 12 20:24:50.741077 kubelet[1559]: I0212 20:24:50.741066 1559 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 20:24:50.741153 kubelet[1559]: I0212 20:24:50.741143 1559 kubelet.go:309] "Adding apiserver pod source"
Feb 12 20:24:50.741257 kubelet[1559]: I0212 20:24:50.741231 1559 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 20:24:50.742667 kubelet[1559]: I0212 20:24:50.742628 1559 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 20:24:50.743195 kubelet[1559]: W0212 20:24:50.743164 1559 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 20:24:50.744327 kubelet[1559]: I0212 20:24:50.744292 1559 server.go:1232] "Started kubelet"
Feb 12 20:24:50.744596 kubelet[1559]: W0212 20:24:50.744516 1559 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-4-c19eb846e8.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.744647 kubelet[1559]: E0212 20:24:50.744632 1559 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-4-c19eb846e8.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.747127 kubelet[1559]: E0212 20:24:50.747024 1559 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-4-c19eb846e8.novalocal.17b3374dbe0f861c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-4-c19eb846e8.novalocal", UID:"ci-3510-3-2-4-c19eb846e8.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-4-c19eb846e8.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 24, 50, 744247836, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 24, 50, 744247836, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510-3-2-4-c19eb846e8.novalocal"}': 'Post "https://172.24.4.211:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.211:6443: connect: connection refused'(may retry after sleeping)
Feb 12 20:24:50.747506 kubelet[1559]: W0212 20:24:50.747471 1559 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.211:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.747594 kubelet[1559]: E0212 20:24:50.747584 1559 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.211:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.748257 kubelet[1559]: E0212 20:24:50.748244 1559 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 20:24:50.748339 kubelet[1559]: E0212 20:24:50.748330 1559 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 20:24:50.748701 kubelet[1559]: I0212 20:24:50.748690 1559 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 20:24:50.749187 kubelet[1559]: I0212 20:24:50.749174 1559 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 12 20:24:50.749323 kubelet[1559]: I0212 20:24:50.749313 1559 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 20:24:50.752925 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 20:24:50.754690 kubelet[1559]: I0212 20:24:50.754674 1559 server.go:462] "Adding debug handlers to kubelet server"
Feb 12 20:24:50.759158 kubelet[1559]: I0212 20:24:50.759139 1559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 20:24:50.766220 kubelet[1559]: E0212 20:24:50.763803 1559 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-4-c19eb846e8.novalocal\" not found"
Feb 12 20:24:50.766220 kubelet[1559]: I0212 20:24:50.764414 1559 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 12 20:24:50.771547 kubelet[1559]: I0212 20:24:50.771505 1559 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 20:24:50.771710 kubelet[1559]: I0212 20:24:50.771681 1559 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 12 20:24:50.773150 kubelet[1559]: W0212 20:24:50.773068 1559 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.773272 kubelet[1559]: E0212 20:24:50.773175 1559 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.773735 kubelet[1559]: E0212 20:24:50.773692 1559 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-4-c19eb846e8.novalocal?timeout=10s\": dial tcp 172.24.4.211:6443: connect: connection refused" interval="200ms"
Feb 12 20:24:50.814163 kubelet[1559]: I0212 20:24:50.814102 1559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 12 20:24:50.819853 kubelet[1559]: I0212 20:24:50.819799 1559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 12 20:24:50.819953 kubelet[1559]: I0212 20:24:50.819900 1559 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 12 20:24:50.819995 kubelet[1559]: I0212 20:24:50.819958 1559 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 12 20:24:50.820136 kubelet[1559]: E0212 20:24:50.820092 1559 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 20:24:50.821233 kubelet[1559]: W0212 20:24:50.821184 1559 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.821346 kubelet[1559]: E0212 20:24:50.821333 1559 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:50.829982 kubelet[1559]: I0212 20:24:50.829963 1559 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 20:24:50.830104 kubelet[1559]: I0212 20:24:50.830093 1559 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 20:24:50.830170 kubelet[1559]: I0212 20:24:50.830161 1559 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:24:50.834447 kubelet[1559]: I0212 20:24:50.834430 1559 policy_none.go:49] "None policy: Start"
Feb 12 20:24:50.835158 kubelet[1559]: I0212 20:24:50.835143 1559 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 20:24:50.835342 kubelet[1559]: I0212 20:24:50.835329 1559 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 20:24:50.841332 systemd[1]: Created slice kubepods.slice.
Feb 12 20:24:50.846108 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 20:24:50.849135 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 20:24:50.854923 kubelet[1559]: I0212 20:24:50.854888 1559 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 20:24:50.855131 kubelet[1559]: I0212 20:24:50.855103 1559 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 20:24:50.856399 kubelet[1559]: E0212 20:24:50.856190 1559 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-4-c19eb846e8.novalocal\" not found"
Feb 12 20:24:50.873633 kubelet[1559]: I0212 20:24:50.873616 1559 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:50.874046 kubelet[1559]: E0212 20:24:50.874031 1559 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.211:6443/api/v1/nodes\": dial tcp 172.24.4.211:6443: connect: connection refused" node="ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:50.920508 kubelet[1559]: I0212 20:24:50.920420 1559 topology_manager.go:215] "Topology Admit Handler" podUID="c7191cf42282ad791bcb11d86df519c3" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:50.924003 kubelet[1559]: I0212 20:24:50.923950 1559 topology_manager.go:215] "Topology Admit Handler" podUID="ab7acd40d0d9ee62fe6346e04ae74794" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:50.927577 kubelet[1559]: I0212 20:24:50.927536 1559 topology_manager.go:215] "Topology Admit Handler" podUID="7f56f79564aff83260cd11021c30b9b4" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:50.940912 systemd[1]: Created slice kubepods-burstable-podc7191cf42282ad791bcb11d86df519c3.slice.
Feb 12 20:24:50.963422 systemd[1]: Created slice kubepods-burstable-pod7f56f79564aff83260cd11021c30b9b4.slice.
Feb 12 20:24:50.972684 systemd[1]: Created slice kubepods-burstable-podab7acd40d0d9ee62fe6346e04ae74794.slice.
Feb 12 20:24:50.979354 kubelet[1559]: E0212 20:24:50.975031 1559 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-4-c19eb846e8.novalocal?timeout=10s\": dial tcp 172.24.4.211:6443: connect: connection refused" interval="400ms"
Feb 12 20:24:51.072672 kubelet[1559]: I0212 20:24:51.072591 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7191cf42282ad791bcb11d86df519c3-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"c7191cf42282ad791bcb11d86df519c3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.073337 kubelet[1559]: I0212 20:24:51.073309 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7191cf42282ad791bcb11d86df519c3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"c7191cf42282ad791bcb11d86df519c3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.073730 kubelet[1559]: I0212 20:24:51.073668 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.074120 kubelet[1559]: I0212 20:24:51.074061 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7191cf42282ad791bcb11d86df519c3-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"c7191cf42282ad791bcb11d86df519c3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.075533 kubelet[1559]: I0212 20:24:51.075031 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.075533 kubelet[1559]: I0212 20:24:51.075158 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.075533 kubelet[1559]: I0212 20:24:51.075311 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.075533 kubelet[1559]: I0212 20:24:51.075388 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.075898 kubelet[1559]: I0212 20:24:51.075448 1559 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f56f79564aff83260cd11021c30b9b4-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"7f56f79564aff83260cd11021c30b9b4\") " pod="kube-system/kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.077814 kubelet[1559]: I0212 20:24:51.077757 1559 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.078918 kubelet[1559]: E0212 20:24:51.078869 1559 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.211:6443/api/v1/nodes\": dial tcp 172.24.4.211:6443: connect: connection refused" node="ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.262333 env[1061]: time="2024-02-12T20:24:51.262050075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal,Uid:c7191cf42282ad791bcb11d86df519c3,Namespace:kube-system,Attempt:0,}"
Feb 12 20:24:51.271365 env[1061]: time="2024-02-12T20:24:51.271283654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal,Uid:7f56f79564aff83260cd11021c30b9b4,Namespace:kube-system,Attempt:0,}"
Feb 12 20:24:51.285398 env[1061]: time="2024-02-12T20:24:51.285273583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal,Uid:ab7acd40d0d9ee62fe6346e04ae74794,Namespace:kube-system,Attempt:0,}"
Feb 12 20:24:51.376405 kubelet[1559]: E0212 20:24:51.376305 1559 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-4-c19eb846e8.novalocal?timeout=10s\": dial tcp 172.24.4.211:6443: connect: connection refused" interval="800ms"
Feb 12 20:24:51.483902 kubelet[1559]: I0212 20:24:51.483846 1559 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.485099 kubelet[1559]: E0212 20:24:51.485017 1559 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.211:6443/api/v1/nodes\": dial tcp 172.24.4.211:6443: connect: connection refused" node="ci-3510-3-2-4-c19eb846e8.novalocal"
Feb 12 20:24:51.715170 kubelet[1559]: W0212 20:24:51.715020 1559 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:51.715170 kubelet[1559]: E0212 20:24:51.715118 1559 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
Feb 12 20:24:51.856587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount821513522.mount: Deactivated successfully.
Feb 12 20:24:51.869887 env[1061]: time="2024-02-12T20:24:51.869574215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.873490 env[1061]: time="2024-02-12T20:24:51.873402488Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.877455 env[1061]: time="2024-02-12T20:24:51.877400653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.879522 env[1061]: time="2024-02-12T20:24:51.879471290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.883534 env[1061]: time="2024-02-12T20:24:51.883480635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.888363 env[1061]: time="2024-02-12T20:24:51.888290276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.895479 env[1061]: time="2024-02-12T20:24:51.895419123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.914278 env[1061]: time="2024-02-12T20:24:51.914166082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.916508 env[1061]: time="2024-02-12T20:24:51.916454621Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.926567 env[1061]: time="2024-02-12T20:24:51.926511869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.928388 env[1061]: time="2024-02-12T20:24:51.928349286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.934267 env[1061]: time="2024-02-12T20:24:51.934239209Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:51.940234 env[1061]: time="2024-02-12T20:24:51.939999217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:51.940234 env[1061]: time="2024-02-12T20:24:51.940156916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:51.940234 env[1061]: time="2024-02-12T20:24:51.940173758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:51.941170 env[1061]: time="2024-02-12T20:24:51.940973060Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b21415fc4dd9ac54f9a62a2e8bfee3b2cbc3e38828465ca50987092fb0a95a3 pid=1598 runtime=io.containerd.runc.v2 Feb 12 20:24:51.973429 systemd[1]: Started cri-containerd-7b21415fc4dd9ac54f9a62a2e8bfee3b2cbc3e38828465ca50987092fb0a95a3.scope. Feb 12 20:24:51.995992 env[1061]: time="2024-02-12T20:24:51.988178731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:51.995992 env[1061]: time="2024-02-12T20:24:51.988310552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:51.995992 env[1061]: time="2024-02-12T20:24:51.988327724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:51.995992 env[1061]: time="2024-02-12T20:24:51.988542850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c30c07e46ec6d8c0a5fef88106bd2082a5b9a244528e97f6ecbf77374891c337 pid=1634 runtime=io.containerd.runc.v2 Feb 12 20:24:52.003898 env[1061]: time="2024-02-12T20:24:52.003817127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:52.004137 env[1061]: time="2024-02-12T20:24:52.004111883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:52.004288 env[1061]: time="2024-02-12T20:24:52.004264192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:52.004572 env[1061]: time="2024-02-12T20:24:52.004541476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f71e10f05da385b2396a43d845ee4f78e74d6d33b72726c4e67604d546c17960 pid=1649 runtime=io.containerd.runc.v2 Feb 12 20:24:52.021306 systemd[1]: Started cri-containerd-c30c07e46ec6d8c0a5fef88106bd2082a5b9a244528e97f6ecbf77374891c337.scope. Feb 12 20:24:52.032926 kubelet[1559]: W0212 20:24:52.032794 1559 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.211:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused Feb 12 20:24:52.032926 kubelet[1559]: E0212 20:24:52.032889 1559 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.211:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused Feb 12 20:24:52.038973 env[1061]: time="2024-02-12T20:24:52.038922990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal,Uid:7f56f79564aff83260cd11021c30b9b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b21415fc4dd9ac54f9a62a2e8bfee3b2cbc3e38828465ca50987092fb0a95a3\"" Feb 12 20:24:52.045928 env[1061]: time="2024-02-12T20:24:52.045885506Z" level=info msg="CreateContainer within sandbox \"7b21415fc4dd9ac54f9a62a2e8bfee3b2cbc3e38828465ca50987092fb0a95a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 20:24:52.071366 systemd[1]: Started cri-containerd-f71e10f05da385b2396a43d845ee4f78e74d6d33b72726c4e67604d546c17960.scope. 
Feb 12 20:24:52.102447 env[1061]: time="2024-02-12T20:24:52.102379092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal,Uid:ab7acd40d0d9ee62fe6346e04ae74794,Namespace:kube-system,Attempt:0,} returns sandbox id \"c30c07e46ec6d8c0a5fef88106bd2082a5b9a244528e97f6ecbf77374891c337\"" Feb 12 20:24:52.110537 env[1061]: time="2024-02-12T20:24:52.110491221Z" level=info msg="CreateContainer within sandbox \"c30c07e46ec6d8c0a5fef88106bd2082a5b9a244528e97f6ecbf77374891c337\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 20:24:52.130249 env[1061]: time="2024-02-12T20:24:52.130165615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal,Uid:c7191cf42282ad791bcb11d86df519c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f71e10f05da385b2396a43d845ee4f78e74d6d33b72726c4e67604d546c17960\"" Feb 12 20:24:52.133884 env[1061]: time="2024-02-12T20:24:52.133840053Z" level=info msg="CreateContainer within sandbox \"f71e10f05da385b2396a43d845ee4f78e74d6d33b72726c4e67604d546c17960\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 20:24:52.163902 env[1061]: time="2024-02-12T20:24:52.163791409Z" level=info msg="CreateContainer within sandbox \"7b21415fc4dd9ac54f9a62a2e8bfee3b2cbc3e38828465ca50987092fb0a95a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"492630e1c5dc91a17f2569abe93b2314b058dcac5b0a621274d30bf2084a7715\"" Feb 12 20:24:52.165002 env[1061]: time="2024-02-12T20:24:52.164947345Z" level=info msg="StartContainer for \"492630e1c5dc91a17f2569abe93b2314b058dcac5b0a621274d30bf2084a7715\"" Feb 12 20:24:52.177763 kubelet[1559]: E0212 20:24:52.177717 1559 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-4-c19eb846e8.novalocal?timeout=10s\": 
dial tcp 172.24.4.211:6443: connect: connection refused" interval="1.6s" Feb 12 20:24:52.185863 kubelet[1559]: W0212 20:24:52.185574 1559 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-4-c19eb846e8.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused Feb 12 20:24:52.185863 kubelet[1559]: E0212 20:24:52.185798 1559 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-4-c19eb846e8.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused Feb 12 20:24:52.194570 systemd[1]: Started cri-containerd-492630e1c5dc91a17f2569abe93b2314b058dcac5b0a621274d30bf2084a7715.scope. Feb 12 20:24:52.204491 env[1061]: time="2024-02-12T20:24:52.204432509Z" level=info msg="CreateContainer within sandbox \"c30c07e46ec6d8c0a5fef88106bd2082a5b9a244528e97f6ecbf77374891c337\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ad4650aca7dc3584aea3086c7a831a89f25c279040560bfcc85f912e5a3d66bf\"" Feb 12 20:24:52.205398 env[1061]: time="2024-02-12T20:24:52.205322011Z" level=info msg="StartContainer for \"ad4650aca7dc3584aea3086c7a831a89f25c279040560bfcc85f912e5a3d66bf\"" Feb 12 20:24:52.220026 env[1061]: time="2024-02-12T20:24:52.219869600Z" level=info msg="CreateContainer within sandbox \"f71e10f05da385b2396a43d845ee4f78e74d6d33b72726c4e67604d546c17960\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"37d1532a1d0832178fe8b78cbc14bbee112b24b90e561f2cfee3878edc812c34\"" Feb 12 20:24:52.221136 env[1061]: time="2024-02-12T20:24:52.221115105Z" level=info msg="StartContainer for \"37d1532a1d0832178fe8b78cbc14bbee112b24b90e561f2cfee3878edc812c34\"" Feb 12 20:24:52.240885 
systemd[1]: Started cri-containerd-ad4650aca7dc3584aea3086c7a831a89f25c279040560bfcc85f912e5a3d66bf.scope. Feb 12 20:24:52.245465 kubelet[1559]: W0212 20:24:52.242928 1559 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused Feb 12 20:24:52.245465 kubelet[1559]: E0212 20:24:52.242970 1559 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused Feb 12 20:24:52.272835 systemd[1]: Started cri-containerd-37d1532a1d0832178fe8b78cbc14bbee112b24b90e561f2cfee3878edc812c34.scope. Feb 12 20:24:52.275692 env[1061]: time="2024-02-12T20:24:52.273733564Z" level=info msg="StartContainer for \"492630e1c5dc91a17f2569abe93b2314b058dcac5b0a621274d30bf2084a7715\" returns successfully" Feb 12 20:24:52.287487 kubelet[1559]: I0212 20:24:52.287433 1559 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:52.289232 kubelet[1559]: E0212 20:24:52.288177 1559 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.211:6443/api/v1/nodes\": dial tcp 172.24.4.211:6443: connect: connection refused" node="ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:52.329441 env[1061]: time="2024-02-12T20:24:52.329375219Z" level=info msg="StartContainer for \"ad4650aca7dc3584aea3086c7a831a89f25c279040560bfcc85f912e5a3d66bf\" returns successfully" Feb 12 20:24:52.362482 env[1061]: time="2024-02-12T20:24:52.362429573Z" level=info msg="StartContainer for \"37d1532a1d0832178fe8b78cbc14bbee112b24b90e561f2cfee3878edc812c34\" returns successfully" Feb 12 20:24:52.748610 kubelet[1559]: E0212 
20:24:52.748582 1559 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.211:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.211:6443: connect: connection refused Feb 12 20:24:53.250933 update_engine[1052]: I0212 20:24:53.250257 1052 update_attempter.cc:509] Updating boot flags... Feb 12 20:24:53.891335 kubelet[1559]: I0212 20:24:53.891294 1559 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:55.700374 kubelet[1559]: E0212 20:24:55.700325 1559 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-2-4-c19eb846e8.novalocal\" not found" node="ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:55.746970 kubelet[1559]: I0212 20:24:55.746917 1559 apiserver.go:52] "Watching apiserver" Feb 12 20:24:55.772493 kubelet[1559]: I0212 20:24:55.772437 1559 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:24:55.788504 kubelet[1559]: I0212 20:24:55.788455 1559 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:57.019136 kubelet[1559]: W0212 20:24:57.019048 1559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 20:24:58.390834 kubelet[1559]: W0212 20:24:58.389890 1559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 20:24:58.700086 systemd[1]: Reloading. 
Feb 12 20:24:58.818392 /usr/lib/systemd/system-generators/torcx-generator[1868]: time="2024-02-12T20:24:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:24:58.818470 /usr/lib/systemd/system-generators/torcx-generator[1868]: time="2024-02-12T20:24:58Z" level=info msg="torcx already run" Feb 12 20:24:58.910223 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:24:58.910432 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:24:58.936271 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:24:59.063469 systemd[1]: Stopping kubelet.service... Feb 12 20:24:59.081978 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 20:24:59.082315 systemd[1]: Stopped kubelet.service. Feb 12 20:24:59.082446 systemd[1]: kubelet.service: Consumed 1.196s CPU time. Feb 12 20:24:59.085197 systemd[1]: Started kubelet.service. Feb 12 20:24:59.214311 kubelet[1913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:59.214311 kubelet[1913]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 12 20:24:59.214311 kubelet[1913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:24:59.214311 kubelet[1913]: I0212 20:24:59.214279 1913 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:24:59.221656 kubelet[1913]: I0212 20:24:59.221611 1913 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 12 20:24:59.221656 kubelet[1913]: I0212 20:24:59.221641 1913 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:24:59.222088 kubelet[1913]: I0212 20:24:59.221843 1913 server.go:895] "Client rotation is on, will bootstrap in background" Feb 12 20:24:59.223579 kubelet[1913]: I0212 20:24:59.223553 1913 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 20:24:59.225015 kubelet[1913]: I0212 20:24:59.224992 1913 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:24:59.235117 kubelet[1913]: I0212 20:24:59.235083 1913 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:24:59.235640 kubelet[1913]: I0212 20:24:59.235578 1913 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:24:59.236075 kubelet[1913]: I0212 20:24:59.236058 1913 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 12 20:24:59.236614 kubelet[1913]: I0212 20:24:59.236574 1913 topology_manager.go:138] "Creating topology manager with none policy" Feb 12 20:24:59.236614 kubelet[1913]: I0212 20:24:59.236611 1913 container_manager_linux.go:301] "Creating device plugin manager" Feb 12 20:24:59.236740 kubelet[1913]: I0212 
20:24:59.236668 1913 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:59.236783 kubelet[1913]: I0212 20:24:59.236764 1913 kubelet.go:393] "Attempting to sync node with API server" Feb 12 20:24:59.236783 kubelet[1913]: I0212 20:24:59.236783 1913 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:24:59.236952 kubelet[1913]: I0212 20:24:59.236812 1913 kubelet.go:309] "Adding apiserver pod source" Feb 12 20:24:59.236952 kubelet[1913]: I0212 20:24:59.236831 1913 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:24:59.241321 kubelet[1913]: I0212 20:24:59.241302 1913 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:24:59.241984 kubelet[1913]: I0212 20:24:59.241970 1913 server.go:1232] "Started kubelet" Feb 12 20:24:59.242511 sudo[1925]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 20:24:59.242748 sudo[1925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 20:24:59.245271 kubelet[1913]: I0212 20:24:59.245256 1913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:24:59.266620 kubelet[1913]: I0212 20:24:59.266592 1913 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:24:59.267800 kubelet[1913]: I0212 20:24:59.267784 1913 server.go:462] "Adding debug handlers to kubelet server" Feb 12 20:24:59.268996 kubelet[1913]: E0212 20:24:59.268965 1913 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:24:59.269054 kubelet[1913]: E0212 20:24:59.269004 1913 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:24:59.274034 kubelet[1913]: I0212 20:24:59.273248 1913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 12 20:24:59.274034 kubelet[1913]: I0212 20:24:59.273970 1913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 12 20:24:59.274034 kubelet[1913]: I0212 20:24:59.274025 1913 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 12 20:24:59.274258 kubelet[1913]: I0212 20:24:59.274046 1913 kubelet.go:2303] "Starting kubelet main sync loop" Feb 12 20:24:59.274258 kubelet[1913]: E0212 20:24:59.274093 1913 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:24:59.275078 kubelet[1913]: I0212 20:24:59.267980 1913 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 20:24:59.279230 kubelet[1913]: I0212 20:24:59.275488 1913 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 12 20:24:59.279230 kubelet[1913]: I0212 20:24:59.275866 1913 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:24:59.279230 kubelet[1913]: I0212 20:24:59.276004 1913 reconciler_new.go:29] "Reconciler: start to sync state" Feb 12 20:24:59.279230 kubelet[1913]: I0212 20:24:59.276348 1913 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 12 20:24:59.363164 kubelet[1913]: I0212 20:24:59.363129 1913 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:24:59.363164 kubelet[1913]: I0212 20:24:59.363156 1913 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:24:59.363164 kubelet[1913]: I0212 20:24:59.363175 1913 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:24:59.363453 kubelet[1913]: I0212 
20:24:59.363419 1913 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 20:24:59.363453 kubelet[1913]: I0212 20:24:59.363443 1913 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 12 20:24:59.363453 kubelet[1913]: I0212 20:24:59.363450 1913 policy_none.go:49] "None policy: Start" Feb 12 20:24:59.364002 kubelet[1913]: I0212 20:24:59.363980 1913 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:24:59.364002 kubelet[1913]: I0212 20:24:59.364004 1913 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:24:59.364143 kubelet[1913]: I0212 20:24:59.364122 1913 state_mem.go:75] "Updated machine memory state" Feb 12 20:24:59.369050 kubelet[1913]: I0212 20:24:59.369032 1913 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:24:59.369377 kubelet[1913]: I0212 20:24:59.369364 1913 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:24:59.374992 kubelet[1913]: I0212 20:24:59.374969 1913 topology_manager.go:215] "Topology Admit Handler" podUID="7f56f79564aff83260cd11021c30b9b4" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.375287 kubelet[1913]: I0212 20:24:59.375273 1913 topology_manager.go:215] "Topology Admit Handler" podUID="c7191cf42282ad791bcb11d86df519c3" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.375637 kubelet[1913]: I0212 20:24:59.375621 1913 topology_manager.go:215] "Topology Admit Handler" podUID="ab7acd40d0d9ee62fe6346e04ae74794" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.386506 kubelet[1913]: W0212 20:24:59.386471 1913 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 20:24:59.391550 kubelet[1913]: I0212 
20:24:59.391381 1913 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.401371 kubelet[1913]: W0212 20:24:59.401347 1913 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 20:24:59.401591 kubelet[1913]: W0212 20:24:59.401577 1913 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 20:24:59.401782 kubelet[1913]: E0212 20:24:59.401769 1913 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.402033 kubelet[1913]: E0212 20:24:59.402007 1913 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.408586 kubelet[1913]: I0212 20:24:59.408537 1913 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.408713 kubelet[1913]: I0212 20:24:59.408629 1913 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.485771 kubelet[1913]: I0212 20:24:59.485712 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f56f79564aff83260cd11021c30b9b4-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"7f56f79564aff83260cd11021c30b9b4\") " pod="kube-system/kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.485988 kubelet[1913]: I0212 20:24:59.485976 1913 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7191cf42282ad791bcb11d86df519c3-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"c7191cf42282ad791bcb11d86df519c3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.486113 kubelet[1913]: I0212 20:24:59.486100 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7191cf42282ad791bcb11d86df519c3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"c7191cf42282ad791bcb11d86df519c3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.486265 kubelet[1913]: I0212 20:24:59.486253 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.486404 kubelet[1913]: I0212 20:24:59.486392 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.486523 kubelet[1913]: I0212 20:24:59.486511 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-k8s-certs\") pod 
\"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.486640 kubelet[1913]: I0212 20:24:59.486628 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.486768 kubelet[1913]: I0212 20:24:59.486757 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab7acd40d0d9ee62fe6346e04ae74794-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"ab7acd40d0d9ee62fe6346e04ae74794\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.486890 kubelet[1913]: I0212 20:24:59.486879 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7191cf42282ad791bcb11d86df519c3-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal\" (UID: \"c7191cf42282ad791bcb11d86df519c3\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal" Feb 12 20:24:59.941394 sudo[1925]: pam_unix(sudo:session): session closed for user root Feb 12 20:25:00.252623 kubelet[1913]: I0212 20:25:00.252496 1913 apiserver.go:52] "Watching apiserver" Feb 12 20:25:00.276352 kubelet[1913]: I0212 20:25:00.276321 1913 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:25:00.325585 kubelet[1913]: I0212 20:25:00.325544 1913 pod_startup_latency_tracker.go:102] 
"Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-4-c19eb846e8.novalocal" podStartSLOduration=1.325504093 podCreationTimestamp="2024-02-12 20:24:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:00.320591364 +0000 UTC m=+1.216475626" watchObservedRunningTime="2024-02-12 20:25:00.325504093 +0000 UTC m=+1.221388355" Feb 12 20:25:00.339423 kubelet[1913]: I0212 20:25:00.339383 1913 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-4-c19eb846e8.novalocal" podStartSLOduration=2.339291639 podCreationTimestamp="2024-02-12 20:24:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:00.329666137 +0000 UTC m=+1.225550400" watchObservedRunningTime="2024-02-12 20:25:00.339291639 +0000 UTC m=+1.235175911" Feb 12 20:25:02.018348 sudo[1162]: pam_unix(sudo:session): session closed for user root Feb 12 20:25:02.253288 sshd[1157]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:02.256787 systemd-logind[1050]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:25:02.257424 systemd[1]: sshd@4-172.24.4.211:22-172.24.4.1:38436.service: Deactivated successfully. Feb 12 20:25:02.258358 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:25:02.258539 systemd[1]: session-5.scope: Consumed 6.267s CPU time. Feb 12 20:25:02.261581 systemd-logind[1050]: Removed session 5. 
Feb 12 20:25:07.053623 kubelet[1913]: I0212 20:25:07.053495 1913 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-4-c19eb846e8.novalocal" podStartSLOduration=10.05328649 podCreationTimestamp="2024-02-12 20:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:00.339782463 +0000 UTC m=+1.235666726" watchObservedRunningTime="2024-02-12 20:25:07.05328649 +0000 UTC m=+7.949170802" Feb 12 20:25:11.256617 kubelet[1913]: I0212 20:25:11.256593 1913 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 20:25:11.257617 env[1061]: time="2024-02-12T20:25:11.257548957Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 20:25:11.258120 kubelet[1913]: I0212 20:25:11.258104 1913 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 20:25:11.760445 kubelet[1913]: I0212 20:25:11.760352 1913 topology_manager.go:215] "Topology Admit Handler" podUID="f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7" podNamespace="kube-system" podName="kube-proxy-lnr2d" Feb 12 20:25:11.769392 systemd[1]: Created slice kubepods-besteffort-podf0ed1bba_d4e3_4c9f_a90a_3bf3fc589ce7.slice. 
Feb 12 20:25:11.780798 kubelet[1913]: I0212 20:25:11.780760 1913 topology_manager.go:215] "Topology Admit Handler" podUID="7ef1117f-243e-434a-80e9-0d5771b1fe67" podNamespace="kube-system" podName="cilium-md9s8" Feb 12 20:25:11.784441 kubelet[1913]: W0212 20:25:11.784417 1913 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-2-4-c19eb846e8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-4-c19eb846e8.novalocal' and this object Feb 12 20:25:11.784599 kubelet[1913]: E0212 20:25:11.784586 1913 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-2-4-c19eb846e8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-4-c19eb846e8.novalocal' and this object Feb 12 20:25:11.787071 systemd[1]: Created slice kubepods-burstable-pod7ef1117f_243e_434a_80e9_0d5771b1fe67.slice. 
Feb 12 20:25:11.789362 kubelet[1913]: W0212 20:25:11.789341 1913 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-2-4-c19eb846e8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-4-c19eb846e8.novalocal' and this object Feb 12 20:25:11.789491 kubelet[1913]: E0212 20:25:11.789478 1913 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-2-4-c19eb846e8.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-4-c19eb846e8.novalocal' and this object Feb 12 20:25:11.874308 kubelet[1913]: I0212 20:25:11.874278 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-hubble-tls\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.874522 kubelet[1913]: I0212 20:25:11.874510 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-config-path\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.874643 kubelet[1913]: I0212 20:25:11.874631 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-bpf-maps\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 
20:25:11.874759 kubelet[1913]: I0212 20:25:11.874748 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-lib-modules\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.874873 kubelet[1913]: I0212 20:25:11.874862 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ef1117f-243e-434a-80e9-0d5771b1fe67-clustermesh-secrets\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.874994 kubelet[1913]: I0212 20:25:11.874983 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-cgroup\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.875118 kubelet[1913]: I0212 20:25:11.875107 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7-lib-modules\") pod \"kube-proxy-lnr2d\" (UID: \"f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7\") " pod="kube-system/kube-proxy-lnr2d" Feb 12 20:25:11.875272 kubelet[1913]: I0212 20:25:11.875260 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-hostproc\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.875391 kubelet[1913]: I0212 20:25:11.875380 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-lzs67\" (UniqueName: \"kubernetes.io/projected/f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7-kube-api-access-lzs67\") pod \"kube-proxy-lnr2d\" (UID: \"f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7\") " pod="kube-system/kube-proxy-lnr2d" Feb 12 20:25:11.875509 kubelet[1913]: I0212 20:25:11.875498 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7-kube-proxy\") pod \"kube-proxy-lnr2d\" (UID: \"f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7\") " pod="kube-system/kube-proxy-lnr2d" Feb 12 20:25:11.875625 kubelet[1913]: I0212 20:25:11.875614 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-run\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.875736 kubelet[1913]: I0212 20:25:11.875725 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cni-path\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.875851 kubelet[1913]: I0212 20:25:11.875839 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-etc-cni-netd\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.875982 kubelet[1913]: I0212 20:25:11.875952 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-xtables-lock\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.876155 kubelet[1913]: I0212 20:25:11.876144 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-host-proc-sys-net\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.876297 kubelet[1913]: I0212 20:25:11.876285 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbh7k\" (UniqueName: \"kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-kube-api-access-zbh7k\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.876414 kubelet[1913]: I0212 20:25:11.876403 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-host-proc-sys-kernel\") pod \"cilium-md9s8\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") " pod="kube-system/cilium-md9s8" Feb 12 20:25:11.876521 kubelet[1913]: I0212 20:25:11.876510 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7-xtables-lock\") pod \"kube-proxy-lnr2d\" (UID: \"f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7\") " pod="kube-system/kube-proxy-lnr2d" Feb 12 20:25:12.199243 kubelet[1913]: I0212 20:25:12.199165 1913 topology_manager.go:215] "Topology Admit Handler" podUID="ee8e7d06-fbb8-43fa-a25b-766d0adc8f76" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-qms4x" Feb 12 20:25:12.207154 systemd[1]: Created slice 
kubepods-besteffort-podee8e7d06_fbb8_43fa_a25b_766d0adc8f76.slice. Feb 12 20:25:12.279939 kubelet[1913]: I0212 20:25:12.279886 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg2gt\" (UniqueName: \"kubernetes.io/projected/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76-kube-api-access-wg2gt\") pod \"cilium-operator-6bc8ccdb58-qms4x\" (UID: \"ee8e7d06-fbb8-43fa-a25b-766d0adc8f76\") " pod="kube-system/cilium-operator-6bc8ccdb58-qms4x" Feb 12 20:25:12.280392 kubelet[1913]: I0212 20:25:12.279958 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-qms4x\" (UID: \"ee8e7d06-fbb8-43fa-a25b-766d0adc8f76\") " pod="kube-system/cilium-operator-6bc8ccdb58-qms4x" Feb 12 20:25:13.019063 kubelet[1913]: E0212 20:25:13.019022 1913 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:25:13.019534 kubelet[1913]: E0212 20:25:13.019067 1913 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:25:13.020084 kubelet[1913]: E0212 20:25:13.020063 1913 projected.go:198] Error preparing data for projected volume kube-api-access-lzs67 for pod kube-system/kube-proxy-lnr2d: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:25:13.020336 kubelet[1913]: E0212 20:25:13.020319 1913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7-kube-api-access-lzs67 podName:f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7 nodeName:}" failed. No retries permitted until 2024-02-12 20:25:13.520285422 +0000 UTC m=+14.416169694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lzs67" (UniqueName: "kubernetes.io/projected/f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7-kube-api-access-lzs67") pod "kube-proxy-lnr2d" (UID: "f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7") : failed to sync configmap cache: timed out waiting for the condition Feb 12 20:25:13.020481 kubelet[1913]: E0212 20:25:13.020102 1913 projected.go:198] Error preparing data for projected volume kube-api-access-zbh7k for pod kube-system/cilium-md9s8: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:25:13.020620 kubelet[1913]: E0212 20:25:13.020600 1913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-kube-api-access-zbh7k podName:7ef1117f-243e-434a-80e9-0d5771b1fe67 nodeName:}" failed. No retries permitted until 2024-02-12 20:25:13.520582231 +0000 UTC m=+14.416466663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zbh7k" (UniqueName: "kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-kube-api-access-zbh7k") pod "cilium-md9s8" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67") : failed to sync configmap cache: timed out waiting for the condition Feb 12 20:25:13.715800 env[1061]: time="2024-02-12T20:25:13.715681377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qms4x,Uid:ee8e7d06-fbb8-43fa-a25b-766d0adc8f76,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:13.776931 env[1061]: time="2024-02-12T20:25:13.776661609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:13.777354 env[1061]: time="2024-02-12T20:25:13.776857828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:13.777354 env[1061]: time="2024-02-12T20:25:13.776972142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:13.778364 env[1061]: time="2024-02-12T20:25:13.777753530Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0 pid=1995 runtime=io.containerd.runc.v2 Feb 12 20:25:13.834749 systemd[1]: Started cri-containerd-f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0.scope. Feb 12 20:25:13.881249 env[1061]: time="2024-02-12T20:25:13.880942002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lnr2d,Uid:f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:13.891758 env[1061]: time="2024-02-12T20:25:13.891679904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-md9s8,Uid:7ef1117f-243e-434a-80e9-0d5771b1fe67,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:13.901639 env[1061]: time="2024-02-12T20:25:13.901567236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-qms4x,Uid:ee8e7d06-fbb8-43fa-a25b-766d0adc8f76,Namespace:kube-system,Attempt:0,} returns sandbox id \"f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0\"" Feb 12 20:25:13.905403 env[1061]: time="2024-02-12T20:25:13.905364663Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:25:13.959927 env[1061]: time="2024-02-12T20:25:13.956560035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:13.959927 env[1061]: time="2024-02-12T20:25:13.956620228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:13.959927 env[1061]: time="2024-02-12T20:25:13.956634564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:13.959927 env[1061]: time="2024-02-12T20:25:13.956891407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da76c5ebea55043ab9f12f365466a9f9a07dce20d63c6dd1675386c7afe96f22 pid=2045 runtime=io.containerd.runc.v2 Feb 12 20:25:13.961454 env[1061]: time="2024-02-12T20:25:13.955088769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:13.961454 env[1061]: time="2024-02-12T20:25:13.955162157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:13.961454 env[1061]: time="2024-02-12T20:25:13.955175542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:13.961454 env[1061]: time="2024-02-12T20:25:13.955556669Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb pid=2044 runtime=io.containerd.runc.v2 Feb 12 20:25:13.978535 systemd[1]: Started cri-containerd-6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb.scope. Feb 12 20:25:13.991657 systemd[1]: Started cri-containerd-da76c5ebea55043ab9f12f365466a9f9a07dce20d63c6dd1675386c7afe96f22.scope. 
Feb 12 20:25:14.044798 env[1061]: time="2024-02-12T20:25:14.043173713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lnr2d,Uid:f0ed1bba-d4e3-4c9f-a90a-3bf3fc589ce7,Namespace:kube-system,Attempt:0,} returns sandbox id \"da76c5ebea55043ab9f12f365466a9f9a07dce20d63c6dd1675386c7afe96f22\"" Feb 12 20:25:14.051050 env[1061]: time="2024-02-12T20:25:14.051004158Z" level=info msg="CreateContainer within sandbox \"da76c5ebea55043ab9f12f365466a9f9a07dce20d63c6dd1675386c7afe96f22\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:25:14.056542 env[1061]: time="2024-02-12T20:25:14.056487914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-md9s8,Uid:7ef1117f-243e-434a-80e9-0d5771b1fe67,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\"" Feb 12 20:25:14.091998 env[1061]: time="2024-02-12T20:25:14.091902185Z" level=info msg="CreateContainer within sandbox \"da76c5ebea55043ab9f12f365466a9f9a07dce20d63c6dd1675386c7afe96f22\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"355afa081ca1e4ac7afb70243aa9e41a6ee6b939b237907f3c399e1b25ee43ed\"" Feb 12 20:25:14.093903 env[1061]: time="2024-02-12T20:25:14.093840788Z" level=info msg="StartContainer for \"355afa081ca1e4ac7afb70243aa9e41a6ee6b939b237907f3c399e1b25ee43ed\"" Feb 12 20:25:14.135991 systemd[1]: Started cri-containerd-355afa081ca1e4ac7afb70243aa9e41a6ee6b939b237907f3c399e1b25ee43ed.scope. 
Feb 12 20:25:14.188323 env[1061]: time="2024-02-12T20:25:14.188252670Z" level=info msg="StartContainer for \"355afa081ca1e4ac7afb70243aa9e41a6ee6b939b237907f3c399e1b25ee43ed\" returns successfully" Feb 12 20:25:14.355893 kubelet[1913]: I0212 20:25:14.355825 1913 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lnr2d" podStartSLOduration=3.3557020570000002 podCreationTimestamp="2024-02-12 20:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:14.355074907 +0000 UTC m=+15.250959220" watchObservedRunningTime="2024-02-12 20:25:14.355702057 +0000 UTC m=+15.251586369" Feb 12 20:25:15.533481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount97875984.mount: Deactivated successfully. Feb 12 20:25:17.031268 env[1061]: time="2024-02-12T20:25:17.031125457Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:17.033708 env[1061]: time="2024-02-12T20:25:17.033649208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:17.036945 env[1061]: time="2024-02-12T20:25:17.036894695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:17.038553 env[1061]: time="2024-02-12T20:25:17.038489882Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:25:17.043829 env[1061]: time="2024-02-12T20:25:17.041647454Z" level=info msg="CreateContainer within sandbox \"f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:25:17.043829 env[1061]: time="2024-02-12T20:25:17.042514282Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:25:17.067246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981748331.mount: Deactivated successfully. Feb 12 20:25:17.077676 env[1061]: time="2024-02-12T20:25:17.077595134Z" level=info msg="CreateContainer within sandbox \"f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\"" Feb 12 20:25:17.082179 env[1061]: time="2024-02-12T20:25:17.082107561Z" level=info msg="StartContainer for \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\"" Feb 12 20:25:17.132742 systemd[1]: Started cri-containerd-79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425.scope. 
Feb 12 20:25:17.174805 env[1061]: time="2024-02-12T20:25:17.174747302Z" level=info msg="StartContainer for \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\" returns successfully" Feb 12 20:25:19.291866 kubelet[1913]: I0212 20:25:19.290891 1913 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-qms4x" podStartSLOduration=4.155528978 podCreationTimestamp="2024-02-12 20:25:12 +0000 UTC" firstStartedPulling="2024-02-12 20:25:13.904039613 +0000 UTC m=+14.799923875" lastFinishedPulling="2024-02-12 20:25:17.039352291 +0000 UTC m=+17.935236604" observedRunningTime="2024-02-12 20:25:17.39063861 +0000 UTC m=+18.286522872" watchObservedRunningTime="2024-02-12 20:25:19.290841707 +0000 UTC m=+20.186725969" Feb 12 20:25:24.792734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912733463.mount: Deactivated successfully. Feb 12 20:25:29.798481 env[1061]: time="2024-02-12T20:25:29.798323879Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:29.802551 env[1061]: time="2024-02-12T20:25:29.802467316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:29.806784 env[1061]: time="2024-02-12T20:25:29.806710840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:29.809116 env[1061]: time="2024-02-12T20:25:29.809038850Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" 
returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 20:25:29.817744 env[1061]: time="2024-02-12T20:25:29.817648228Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:25:29.844298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757885705.mount: Deactivated successfully. Feb 12 20:25:29.875981 env[1061]: time="2024-02-12T20:25:29.875913478Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\"" Feb 12 20:25:29.878121 env[1061]: time="2024-02-12T20:25:29.876676911Z" level=info msg="StartContainer for \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\"" Feb 12 20:25:29.903753 systemd[1]: Started cri-containerd-fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9.scope. Feb 12 20:25:29.952147 env[1061]: time="2024-02-12T20:25:29.952052344Z" level=info msg="StartContainer for \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\" returns successfully" Feb 12 20:25:29.958039 systemd[1]: cri-containerd-fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9.scope: Deactivated successfully. 
Feb 12 20:25:30.343390 env[1061]: time="2024-02-12T20:25:30.343149842Z" level=info msg="shim disconnected" id=fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9 Feb 12 20:25:30.344101 env[1061]: time="2024-02-12T20:25:30.344042197Z" level=warning msg="cleaning up after shim disconnected" id=fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9 namespace=k8s.io Feb 12 20:25:30.344371 env[1061]: time="2024-02-12T20:25:30.344320499Z" level=info msg="cleaning up dead shim" Feb 12 20:25:30.363344 env[1061]: time="2024-02-12T20:25:30.363274101Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2355 runtime=io.containerd.runc.v2\n" Feb 12 20:25:30.422286 env[1061]: time="2024-02-12T20:25:30.420940531Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:25:30.461449 env[1061]: time="2024-02-12T20:25:30.461274432Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\"" Feb 12 20:25:30.463114 env[1061]: time="2024-02-12T20:25:30.463049763Z" level=info msg="StartContainer for \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\"" Feb 12 20:25:30.492141 systemd[1]: Started cri-containerd-e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e.scope. Feb 12 20:25:30.553310 env[1061]: time="2024-02-12T20:25:30.552578384Z" level=info msg="StartContainer for \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\" returns successfully" Feb 12 20:25:30.563024 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:25:30.563385 systemd[1]: Stopped systemd-sysctl.service. 
Feb 12 20:25:30.564302 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:25:30.567429 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:25:30.567932 systemd[1]: cri-containerd-e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e.scope: Deactivated successfully. Feb 12 20:25:30.609822 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:25:30.616347 env[1061]: time="2024-02-12T20:25:30.616276363Z" level=info msg="shim disconnected" id=e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e Feb 12 20:25:30.616509 env[1061]: time="2024-02-12T20:25:30.616348680Z" level=warning msg="cleaning up after shim disconnected" id=e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e namespace=k8s.io Feb 12 20:25:30.616509 env[1061]: time="2024-02-12T20:25:30.616363688Z" level=info msg="cleaning up dead shim" Feb 12 20:25:30.626108 env[1061]: time="2024-02-12T20:25:30.626040320Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2422 runtime=io.containerd.runc.v2\n" Feb 12 20:25:30.836496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9-rootfs.mount: Deactivated successfully. Feb 12 20:25:31.424054 env[1061]: time="2024-02-12T20:25:31.422656798Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:25:31.458294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559226625.mount: Deactivated successfully. Feb 12 20:25:31.460542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177513001.mount: Deactivated successfully. 
Feb 12 20:25:31.473801 env[1061]: time="2024-02-12T20:25:31.473713809Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\"" Feb 12 20:25:31.479526 env[1061]: time="2024-02-12T20:25:31.479423945Z" level=info msg="StartContainer for \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\"" Feb 12 20:25:31.501789 systemd[1]: Started cri-containerd-d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8.scope. Feb 12 20:25:31.543710 systemd[1]: cri-containerd-d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8.scope: Deactivated successfully. Feb 12 20:25:31.638315 env[1061]: time="2024-02-12T20:25:31.638175241Z" level=info msg="StartContainer for \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\" returns successfully" Feb 12 20:25:31.792305 env[1061]: time="2024-02-12T20:25:31.792032523Z" level=info msg="shim disconnected" id=d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8 Feb 12 20:25:31.792305 env[1061]: time="2024-02-12T20:25:31.792137319Z" level=warning msg="cleaning up after shim disconnected" id=d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8 namespace=k8s.io Feb 12 20:25:31.792305 env[1061]: time="2024-02-12T20:25:31.792169109Z" level=info msg="cleaning up dead shim" Feb 12 20:25:31.811042 env[1061]: time="2024-02-12T20:25:31.810907846Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2481 runtime=io.containerd.runc.v2\n" Feb 12 20:25:31.835148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8-rootfs.mount: Deactivated successfully. 
Feb 12 20:25:32.427907 env[1061]: time="2024-02-12T20:25:32.427834247Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:25:32.469152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137711736.mount: Deactivated successfully. Feb 12 20:25:32.492816 env[1061]: time="2024-02-12T20:25:32.492694469Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\"" Feb 12 20:25:32.494519 env[1061]: time="2024-02-12T20:25:32.494447048Z" level=info msg="StartContainer for \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\"" Feb 12 20:25:32.531164 systemd[1]: Started cri-containerd-698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120.scope. Feb 12 20:25:32.576635 systemd[1]: cri-containerd-698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120.scope: Deactivated successfully. 
Feb 12 20:25:32.579628 env[1061]: time="2024-02-12T20:25:32.579456532Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ef1117f_243e_434a_80e9_0d5771b1fe67.slice/cri-containerd-698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120.scope/memory.events\": no such file or directory" Feb 12 20:25:32.584045 env[1061]: time="2024-02-12T20:25:32.583873431Z" level=info msg="StartContainer for \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\" returns successfully" Feb 12 20:25:32.610867 env[1061]: time="2024-02-12T20:25:32.610806435Z" level=info msg="shim disconnected" id=698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120 Feb 12 20:25:32.611060 env[1061]: time="2024-02-12T20:25:32.610868632Z" level=warning msg="cleaning up after shim disconnected" id=698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120 namespace=k8s.io Feb 12 20:25:32.611060 env[1061]: time="2024-02-12T20:25:32.610881817Z" level=info msg="cleaning up dead shim" Feb 12 20:25:32.619936 env[1061]: time="2024-02-12T20:25:32.619885865Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:25:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2538 runtime=io.containerd.runc.v2\n" Feb 12 20:25:32.833470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120-rootfs.mount: Deactivated successfully. Feb 12 20:25:33.435952 env[1061]: time="2024-02-12T20:25:33.435854074Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:25:33.473609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548557020.mount: Deactivated successfully. 
Feb 12 20:25:33.503434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276907893.mount: Deactivated successfully. Feb 12 20:25:33.586787 env[1061]: time="2024-02-12T20:25:33.586657121Z" level=info msg="CreateContainer within sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\"" Feb 12 20:25:33.588820 env[1061]: time="2024-02-12T20:25:33.588763984Z" level=info msg="StartContainer for \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\"" Feb 12 20:25:33.617566 systemd[1]: Started cri-containerd-8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d.scope. Feb 12 20:25:33.688110 env[1061]: time="2024-02-12T20:25:33.688008146Z" level=info msg="StartContainer for \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\" returns successfully" Feb 12 20:25:33.892413 kubelet[1913]: I0212 20:25:33.891543 1913 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:25:34.011011 kubelet[1913]: I0212 20:25:34.010097 1913 topology_manager.go:215] "Topology Admit Handler" podUID="00271afb-5a47-4daf-a37d-53aa5737065c" podNamespace="kube-system" podName="coredns-5dd5756b68-pwlkp" Feb 12 20:25:34.021363 kubelet[1913]: I0212 20:25:34.021313 1913 topology_manager.go:215] "Topology Admit Handler" podUID="63cb73e4-ef80-455f-bdc8-6463b258ef2a" podNamespace="kube-system" podName="coredns-5dd5756b68-hmkml" Feb 12 20:25:34.023860 systemd[1]: Created slice kubepods-burstable-pod00271afb_5a47_4daf_a37d_53aa5737065c.slice. Feb 12 20:25:34.034072 systemd[1]: Created slice kubepods-burstable-pod63cb73e4_ef80_455f_bdc8_6463b258ef2a.slice. 
Feb 12 20:25:34.053157 kubelet[1913]: I0212 20:25:34.053066 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00271afb-5a47-4daf-a37d-53aa5737065c-config-volume\") pod \"coredns-5dd5756b68-pwlkp\" (UID: \"00271afb-5a47-4daf-a37d-53aa5737065c\") " pod="kube-system/coredns-5dd5756b68-pwlkp" Feb 12 20:25:34.053476 kubelet[1913]: I0212 20:25:34.053320 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63cb73e4-ef80-455f-bdc8-6463b258ef2a-config-volume\") pod \"coredns-5dd5756b68-hmkml\" (UID: \"63cb73e4-ef80-455f-bdc8-6463b258ef2a\") " pod="kube-system/coredns-5dd5756b68-hmkml" Feb 12 20:25:34.053751 kubelet[1913]: I0212 20:25:34.053690 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6vn\" (UniqueName: \"kubernetes.io/projected/00271afb-5a47-4daf-a37d-53aa5737065c-kube-api-access-9r6vn\") pod \"coredns-5dd5756b68-pwlkp\" (UID: \"00271afb-5a47-4daf-a37d-53aa5737065c\") " pod="kube-system/coredns-5dd5756b68-pwlkp" Feb 12 20:25:34.053823 kubelet[1913]: I0212 20:25:34.053809 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdsr6\" (UniqueName: \"kubernetes.io/projected/63cb73e4-ef80-455f-bdc8-6463b258ef2a-kube-api-access-pdsr6\") pod \"coredns-5dd5756b68-hmkml\" (UID: \"63cb73e4-ef80-455f-bdc8-6463b258ef2a\") " pod="kube-system/coredns-5dd5756b68-hmkml" Feb 12 20:25:34.332247 env[1061]: time="2024-02-12T20:25:34.331507271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pwlkp,Uid:00271afb-5a47-4daf-a37d-53aa5737065c,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:34.338417 env[1061]: time="2024-02-12T20:25:34.338341876Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-hmkml,Uid:63cb73e4-ef80-455f-bdc8-6463b258ef2a,Namespace:kube-system,Attempt:0,}" Feb 12 20:25:36.289785 systemd-networkd[978]: cilium_host: Link UP Feb 12 20:25:36.294619 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 20:25:36.294798 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:25:36.292692 systemd-networkd[978]: cilium_net: Link UP Feb 12 20:25:36.293080 systemd-networkd[978]: cilium_net: Gained carrier Feb 12 20:25:36.293511 systemd-networkd[978]: cilium_host: Gained carrier Feb 12 20:25:36.307716 systemd-networkd[978]: cilium_host: Gained IPv6LL Feb 12 20:25:36.449140 systemd-networkd[978]: cilium_vxlan: Link UP Feb 12 20:25:36.449150 systemd-networkd[978]: cilium_vxlan: Gained carrier Feb 12 20:25:36.767796 systemd-networkd[978]: cilium_net: Gained IPv6LL Feb 12 20:25:37.615658 systemd-networkd[978]: cilium_vxlan: Gained IPv6LL Feb 12 20:25:37.933283 kernel: NET: Registered PF_ALG protocol family Feb 12 20:25:38.907388 systemd-networkd[978]: lxc_health: Link UP Feb 12 20:25:38.929871 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:25:38.929520 systemd-networkd[978]: lxc_health: Gained carrier Feb 12 20:25:39.437394 systemd-networkd[978]: lxc0d31376ce27f: Link UP Feb 12 20:25:39.443249 kernel: eth0: renamed from tmp7ca6d Feb 12 20:25:39.448502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0d31376ce27f: link becomes ready Feb 12 20:25:39.448238 systemd-networkd[978]: lxc0d31376ce27f: Gained carrier Feb 12 20:25:39.505755 systemd-networkd[978]: lxc5816413c4399: Link UP Feb 12 20:25:39.512241 kernel: eth0: renamed from tmp97152 Feb 12 20:25:39.517600 systemd-networkd[978]: lxc5816413c4399: Gained carrier Feb 12 20:25:39.518312 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5816413c4399: link becomes ready Feb 12 20:25:39.919989 kubelet[1913]: I0212 20:25:39.919940 1913 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="kube-system/cilium-md9s8" podStartSLOduration=13.168678069 podCreationTimestamp="2024-02-12 20:25:11 +0000 UTC" firstStartedPulling="2024-02-12 20:25:14.058571589 +0000 UTC m=+14.954455871" lastFinishedPulling="2024-02-12 20:25:29.809769862 +0000 UTC m=+30.705654154" observedRunningTime="2024-02-12 20:25:34.5375527 +0000 UTC m=+35.433436973" watchObservedRunningTime="2024-02-12 20:25:39.919876352 +0000 UTC m=+40.815760614" Feb 12 20:25:40.559394 systemd-networkd[978]: lxc5816413c4399: Gained IPv6LL Feb 12 20:25:40.687508 systemd-networkd[978]: lxc_health: Gained IPv6LL Feb 12 20:25:41.228487 systemd-networkd[978]: lxc0d31376ce27f: Gained IPv6LL Feb 12 20:25:44.358702 env[1061]: time="2024-02-12T20:25:44.358522198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:44.358702 env[1061]: time="2024-02-12T20:25:44.358558721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:44.358702 env[1061]: time="2024-02-12T20:25:44.358573679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:44.359153 env[1061]: time="2024-02-12T20:25:44.358720576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ca6dd1627074a9720a973ee692505d2997a1a0d0a4d58eafcdf2bd61df91e3f pid=3082 runtime=io.containerd.runc.v2 Feb 12 20:25:44.378150 systemd[1]: run-containerd-runc-k8s.io-7ca6dd1627074a9720a973ee692505d2997a1a0d0a4d58eafcdf2bd61df91e3f-runc.BIMFkI.mount: Deactivated successfully. Feb 12 20:25:44.382121 systemd[1]: Started cri-containerd-7ca6dd1627074a9720a973ee692505d2997a1a0d0a4d58eafcdf2bd61df91e3f.scope. 
Feb 12 20:25:44.441898 env[1061]: time="2024-02-12T20:25:44.441829633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pwlkp,Uid:00271afb-5a47-4daf-a37d-53aa5737065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ca6dd1627074a9720a973ee692505d2997a1a0d0a4d58eafcdf2bd61df91e3f\"" Feb 12 20:25:44.447476 env[1061]: time="2024-02-12T20:25:44.447437072Z" level=info msg="CreateContainer within sandbox \"7ca6dd1627074a9720a973ee692505d2997a1a0d0a4d58eafcdf2bd61df91e3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:25:44.492274 env[1061]: time="2024-02-12T20:25:44.492154970Z" level=info msg="CreateContainer within sandbox \"7ca6dd1627074a9720a973ee692505d2997a1a0d0a4d58eafcdf2bd61df91e3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"229f2b1659f14405c86c6cbfd7e517cb800a6f6ac4edef5ebd2d90d2b2dfb5f2\"" Feb 12 20:25:44.494088 env[1061]: time="2024-02-12T20:25:44.493058517Z" level=info msg="StartContainer for \"229f2b1659f14405c86c6cbfd7e517cb800a6f6ac4edef5ebd2d90d2b2dfb5f2\"" Feb 12 20:25:44.523600 systemd[1]: Started cri-containerd-229f2b1659f14405c86c6cbfd7e517cb800a6f6ac4edef5ebd2d90d2b2dfb5f2.scope. Feb 12 20:25:44.530341 env[1061]: time="2024-02-12T20:25:44.529829329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:44.530341 env[1061]: time="2024-02-12T20:25:44.529944731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:44.530341 env[1061]: time="2024-02-12T20:25:44.529963244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:44.530341 env[1061]: time="2024-02-12T20:25:44.530237616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/971522f9d61ea0fe3a1564d69e67f408ab90a137a4976d4fe067dfdefc4a5bf8 pid=3138 runtime=io.containerd.runc.v2 Feb 12 20:25:44.558897 systemd[1]: Started cri-containerd-971522f9d61ea0fe3a1564d69e67f408ab90a137a4976d4fe067dfdefc4a5bf8.scope. Feb 12 20:25:44.573532 env[1061]: time="2024-02-12T20:25:44.573402538Z" level=info msg="StartContainer for \"229f2b1659f14405c86c6cbfd7e517cb800a6f6ac4edef5ebd2d90d2b2dfb5f2\" returns successfully" Feb 12 20:25:44.648121 env[1061]: time="2024-02-12T20:25:44.646839326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hmkml,Uid:63cb73e4-ef80-455f-bdc8-6463b258ef2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"971522f9d61ea0fe3a1564d69e67f408ab90a137a4976d4fe067dfdefc4a5bf8\"" Feb 12 20:25:44.652551 env[1061]: time="2024-02-12T20:25:44.652496663Z" level=info msg="CreateContainer within sandbox \"971522f9d61ea0fe3a1564d69e67f408ab90a137a4976d4fe067dfdefc4a5bf8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:25:44.684505 env[1061]: time="2024-02-12T20:25:44.684401628Z" level=info msg="CreateContainer within sandbox \"971522f9d61ea0fe3a1564d69e67f408ab90a137a4976d4fe067dfdefc4a5bf8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96ced13686857c44c988f63c16f605e03295befb89a261cf56328579b1cb9144\"" Feb 12 20:25:44.685367 env[1061]: time="2024-02-12T20:25:44.685330159Z" level=info msg="StartContainer for \"96ced13686857c44c988f63c16f605e03295befb89a261cf56328579b1cb9144\"" Feb 12 20:25:44.712091 systemd[1]: Started cri-containerd-96ced13686857c44c988f63c16f605e03295befb89a261cf56328579b1cb9144.scope. 
Feb 12 20:25:44.770748 env[1061]: time="2024-02-12T20:25:44.770664103Z" level=info msg="StartContainer for \"96ced13686857c44c988f63c16f605e03295befb89a261cf56328579b1cb9144\" returns successfully" Feb 12 20:25:45.376362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790494630.mount: Deactivated successfully. Feb 12 20:25:45.501364 kubelet[1913]: I0212 20:25:45.501307 1913 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pwlkp" podStartSLOduration=33.501176276 podCreationTimestamp="2024-02-12 20:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:45.4956146 +0000 UTC m=+46.391498942" watchObservedRunningTime="2024-02-12 20:25:45.501176276 +0000 UTC m=+46.397060588" Feb 12 20:25:45.536229 kubelet[1913]: I0212 20:25:45.536141 1913 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hmkml" podStartSLOduration=33.536082138 podCreationTimestamp="2024-02-12 20:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:45.534275519 +0000 UTC m=+46.430159791" watchObservedRunningTime="2024-02-12 20:25:45.536082138 +0000 UTC m=+46.431966410" Feb 12 20:25:54.763910 systemd[1]: Started sshd@5-172.24.4.211:22-172.24.4.1:43074.service. Feb 12 20:25:55.935034 sshd[3244]: Accepted publickey for core from 172.24.4.1 port 43074 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:25:55.938668 sshd[3244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:55.953072 systemd[1]: Started session-6.scope. Feb 12 20:25:55.953520 systemd-logind[1050]: New session 6 of user core. 
Feb 12 20:25:56.958123 sshd[3244]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:56.964031 systemd[1]: sshd@5-172.24.4.211:22-172.24.4.1:43074.service: Deactivated successfully. Feb 12 20:25:56.965466 systemd-logind[1050]: Session 6 logged out. Waiting for processes to exit. Feb 12 20:25:56.965854 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 20:25:56.968806 systemd-logind[1050]: Removed session 6. Feb 12 20:26:01.972749 systemd[1]: Started sshd@6-172.24.4.211:22-172.24.4.1:43086.service. Feb 12 20:26:03.169002 sshd[3260]: Accepted publickey for core from 172.24.4.1 port 43086 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:03.172834 sshd[3260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:03.190455 systemd[1]: Started session-7.scope. Feb 12 20:26:03.191381 systemd-logind[1050]: New session 7 of user core. Feb 12 20:26:04.108625 sshd[3260]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:04.114690 systemd[1]: sshd@6-172.24.4.211:22-172.24.4.1:43086.service: Deactivated successfully. Feb 12 20:26:04.116500 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 20:26:04.118184 systemd-logind[1050]: Session 7 logged out. Waiting for processes to exit. Feb 12 20:26:04.120891 systemd-logind[1050]: Removed session 7. Feb 12 20:26:09.120514 systemd[1]: Started sshd@7-172.24.4.211:22-172.24.4.1:33652.service. Feb 12 20:26:10.413404 sshd[3273]: Accepted publickey for core from 172.24.4.1 port 33652 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:10.414603 sshd[3273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:10.429869 systemd[1]: Started session-8.scope. Feb 12 20:26:10.432089 systemd-logind[1050]: New session 8 of user core. Feb 12 20:26:11.410771 sshd[3273]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:11.418829 systemd-logind[1050]: Session 8 logged out. 
Waiting for processes to exit. Feb 12 20:26:11.419088 systemd[1]: sshd@7-172.24.4.211:22-172.24.4.1:33652.service: Deactivated successfully. Feb 12 20:26:11.421663 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 20:26:11.424610 systemd-logind[1050]: Removed session 8. Feb 12 20:26:16.421925 systemd[1]: Started sshd@8-172.24.4.211:22-172.24.4.1:50938.service. Feb 12 20:26:17.825836 sshd[3291]: Accepted publickey for core from 172.24.4.1 port 50938 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:17.828554 sshd[3291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:17.840181 systemd-logind[1050]: New session 9 of user core. Feb 12 20:26:17.841870 systemd[1]: Started session-9.scope. Feb 12 20:26:18.623969 sshd[3291]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:18.632532 systemd[1]: sshd@8-172.24.4.211:22-172.24.4.1:50938.service: Deactivated successfully. Feb 12 20:26:18.634136 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 20:26:18.639726 systemd[1]: Started sshd@9-172.24.4.211:22-172.24.4.1:50946.service. Feb 12 20:26:18.641377 systemd-logind[1050]: Session 9 logged out. Waiting for processes to exit. Feb 12 20:26:18.649336 systemd-logind[1050]: Removed session 9. Feb 12 20:26:20.097582 sshd[3305]: Accepted publickey for core from 172.24.4.1 port 50946 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:20.099449 sshd[3305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:20.111394 systemd-logind[1050]: New session 10 of user core. Feb 12 20:26:20.112361 systemd[1]: Started session-10.scope. Feb 12 20:26:22.068424 sshd[3305]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:22.079825 systemd[1]: Started sshd@10-172.24.4.211:22-172.24.4.1:50954.service. Feb 12 20:26:22.086310 systemd[1]: sshd@9-172.24.4.211:22-172.24.4.1:50946.service: Deactivated successfully. 
Feb 12 20:26:22.088529 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 20:26:22.090817 systemd-logind[1050]: Session 10 logged out. Waiting for processes to exit. Feb 12 20:26:22.095043 systemd-logind[1050]: Removed session 10. Feb 12 20:26:23.622768 sshd[3318]: Accepted publickey for core from 172.24.4.1 port 50954 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:23.625784 sshd[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:23.645869 systemd-logind[1050]: New session 11 of user core. Feb 12 20:26:23.648240 systemd[1]: Started session-11.scope. Feb 12 20:26:24.749885 sshd[3318]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:24.757744 systemd[1]: sshd@10-172.24.4.211:22-172.24.4.1:50954.service: Deactivated successfully. Feb 12 20:26:24.760076 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 20:26:24.762403 systemd-logind[1050]: Session 11 logged out. Waiting for processes to exit. Feb 12 20:26:24.765267 systemd-logind[1050]: Removed session 11. Feb 12 20:26:29.765473 systemd[1]: Started sshd@11-172.24.4.211:22-172.24.4.1:55782.service. Feb 12 20:26:31.117528 sshd[3331]: Accepted publickey for core from 172.24.4.1 port 55782 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:31.120594 sshd[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:31.131325 systemd-logind[1050]: New session 12 of user core. Feb 12 20:26:31.133078 systemd[1]: Started session-12.scope. Feb 12 20:26:31.886990 sshd[3331]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:31.894434 systemd[1]: Started sshd@12-172.24.4.211:22-172.24.4.1:55788.service. Feb 12 20:26:31.895513 systemd[1]: sshd@11-172.24.4.211:22-172.24.4.1:55782.service: Deactivated successfully. Feb 12 20:26:31.897832 systemd[1]: session-12.scope: Deactivated successfully. 
Feb 12 20:26:31.899651 systemd-logind[1050]: Session 12 logged out. Waiting for processes to exit. Feb 12 20:26:31.901747 systemd-logind[1050]: Removed session 12. Feb 12 20:26:33.300376 sshd[3342]: Accepted publickey for core from 172.24.4.1 port 55788 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:33.303277 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:33.315114 systemd-logind[1050]: New session 13 of user core. Feb 12 20:26:33.316522 systemd[1]: Started session-13.scope. Feb 12 20:26:35.661809 sshd[3342]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:35.676612 systemd[1]: Started sshd@13-172.24.4.211:22-172.24.4.1:54900.service. Feb 12 20:26:35.680723 systemd[1]: sshd@12-172.24.4.211:22-172.24.4.1:55788.service: Deactivated successfully. Feb 12 20:26:35.686593 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 20:26:35.691866 systemd-logind[1050]: Session 13 logged out. Waiting for processes to exit. Feb 12 20:26:35.695413 systemd-logind[1050]: Removed session 13. Feb 12 20:26:36.916132 sshd[3352]: Accepted publickey for core from 172.24.4.1 port 54900 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:36.918777 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:36.931156 systemd-logind[1050]: New session 14 of user core. Feb 12 20:26:36.932275 systemd[1]: Started session-14.scope. Feb 12 20:26:39.025486 sshd[3352]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:39.031631 systemd[1]: sshd@13-172.24.4.211:22-172.24.4.1:54900.service: Deactivated successfully. Feb 12 20:26:39.033140 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 20:26:39.036041 systemd-logind[1050]: Session 14 logged out. Waiting for processes to exit. Feb 12 20:26:39.042615 systemd[1]: Started sshd@14-172.24.4.211:22-172.24.4.1:54906.service. 
Feb 12 20:26:39.052102 systemd-logind[1050]: Removed session 14. Feb 12 20:26:40.446620 sshd[3370]: Accepted publickey for core from 172.24.4.1 port 54906 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:40.449489 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:40.463333 systemd[1]: Started session-15.scope. Feb 12 20:26:40.464265 systemd-logind[1050]: New session 15 of user core. Feb 12 20:26:41.776382 sshd[3370]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:41.788907 systemd[1]: Started sshd@15-172.24.4.211:22-172.24.4.1:54910.service. Feb 12 20:26:41.794625 systemd[1]: sshd@14-172.24.4.211:22-172.24.4.1:54906.service: Deactivated successfully. Feb 12 20:26:41.796507 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 20:26:41.798165 systemd-logind[1050]: Session 15 logged out. Waiting for processes to exit. Feb 12 20:26:41.801769 systemd-logind[1050]: Removed session 15. Feb 12 20:26:43.401529 sshd[3380]: Accepted publickey for core from 172.24.4.1 port 54910 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:43.404776 sshd[3380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:43.420752 systemd-logind[1050]: New session 16 of user core. Feb 12 20:26:43.422126 systemd[1]: Started session-16.scope. Feb 12 20:26:44.159375 sshd[3380]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:44.164864 systemd[1]: sshd@15-172.24.4.211:22-172.24.4.1:54910.service: Deactivated successfully. Feb 12 20:26:44.166309 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 20:26:44.167695 systemd-logind[1050]: Session 16 logged out. Waiting for processes to exit. Feb 12 20:26:44.170128 systemd-logind[1050]: Removed session 16. Feb 12 20:26:49.166262 systemd[1]: Started sshd@16-172.24.4.211:22-172.24.4.1:56376.service. 
Feb 12 20:26:50.403850 sshd[3398]: Accepted publickey for core from 172.24.4.1 port 56376 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:50.407131 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:50.418742 systemd[1]: Started session-17.scope. Feb 12 20:26:50.419661 systemd-logind[1050]: New session 17 of user core. Feb 12 20:26:51.158796 sshd[3398]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:51.165451 systemd[1]: sshd@16-172.24.4.211:22-172.24.4.1:56376.service: Deactivated successfully. Feb 12 20:26:51.166970 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 20:26:51.168784 systemd-logind[1050]: Session 17 logged out. Waiting for processes to exit. Feb 12 20:26:51.171006 systemd-logind[1050]: Removed session 17. Feb 12 20:26:56.172395 systemd[1]: Started sshd@17-172.24.4.211:22-172.24.4.1:54898.service. Feb 12 20:26:57.623723 sshd[3410]: Accepted publickey for core from 172.24.4.1 port 54898 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:26:57.627763 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:57.644099 systemd-logind[1050]: New session 18 of user core. Feb 12 20:26:57.646686 systemd[1]: Started session-18.scope. Feb 12 20:26:58.366044 sshd[3410]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:58.372875 systemd-logind[1050]: Session 18 logged out. Waiting for processes to exit. Feb 12 20:26:58.373087 systemd[1]: sshd@17-172.24.4.211:22-172.24.4.1:54898.service: Deactivated successfully. Feb 12 20:26:58.374870 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 20:26:58.377091 systemd-logind[1050]: Removed session 18. Feb 12 20:27:03.376534 systemd[1]: Started sshd@18-172.24.4.211:22-172.24.4.1:54904.service. 
Feb 12 20:27:04.648740 sshd[3424]: Accepted publickey for core from 172.24.4.1 port 54904 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:27:04.654730 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:04.668158 systemd-logind[1050]: New session 19 of user core. Feb 12 20:27:04.669645 systemd[1]: Started session-19.scope. Feb 12 20:27:05.486592 sshd[3424]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:05.493996 systemd[1]: Started sshd@19-172.24.4.211:22-172.24.4.1:59256.service. Feb 12 20:27:05.495115 systemd[1]: sshd@18-172.24.4.211:22-172.24.4.1:54904.service: Deactivated successfully. Feb 12 20:27:05.499955 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 20:27:05.505385 systemd-logind[1050]: Session 19 logged out. Waiting for processes to exit. Feb 12 20:27:05.509074 systemd-logind[1050]: Removed session 19. Feb 12 20:27:06.904343 sshd[3435]: Accepted publickey for core from 172.24.4.1 port 59256 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:27:06.906929 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:06.917274 systemd-logind[1050]: New session 20 of user core. Feb 12 20:27:06.918096 systemd[1]: Started session-20.scope. Feb 12 20:27:09.283662 systemd[1]: run-containerd-runc-k8s.io-8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d-runc.mJ2vMl.mount: Deactivated successfully. 
Feb 12 20:27:09.310580 env[1061]: time="2024-02-12T20:27:09.310523381Z" level=info msg="StopContainer for \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\" with timeout 30 (s)" Feb 12 20:27:09.311397 env[1061]: time="2024-02-12T20:27:09.311343422Z" level=info msg="Stop container \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\" with signal terminated" Feb 12 20:27:09.325331 env[1061]: time="2024-02-12T20:27:09.325256693Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:27:09.333323 env[1061]: time="2024-02-12T20:27:09.333286819Z" level=info msg="StopContainer for \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\" with timeout 2 (s)" Feb 12 20:27:09.333969 env[1061]: time="2024-02-12T20:27:09.333917390Z" level=info msg="Stop container \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\" with signal terminated" Feb 12 20:27:09.342740 systemd-networkd[978]: lxc_health: Link DOWN Feb 12 20:27:09.342748 systemd-networkd[978]: lxc_health: Lost carrier Feb 12 20:27:09.346171 systemd[1]: cri-containerd-79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425.scope: Deactivated successfully. Feb 12 20:27:09.391712 systemd[1]: cri-containerd-8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d.scope: Deactivated successfully. Feb 12 20:27:09.391975 systemd[1]: cri-containerd-8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d.scope: Consumed 9.436s CPU time. Feb 12 20:27:09.398009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425-rootfs.mount: Deactivated successfully. 
Feb 12 20:27:09.408472 env[1061]: time="2024-02-12T20:27:09.408407187Z" level=info msg="shim disconnected" id=79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425
Feb 12 20:27:09.408760 env[1061]: time="2024-02-12T20:27:09.408740296Z" level=warning msg="cleaning up after shim disconnected" id=79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425 namespace=k8s.io
Feb 12 20:27:09.408877 env[1061]: time="2024-02-12T20:27:09.408861591Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:09.419173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d-rootfs.mount: Deactivated successfully.
Feb 12 20:27:09.423405 kubelet[1913]: E0212 20:27:09.423316 1913 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:27:09.426489 env[1061]: time="2024-02-12T20:27:09.426451282Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3496 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:09.429042 env[1061]: time="2024-02-12T20:27:09.428957592Z" level=info msg="shim disconnected" id=8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d
Feb 12 20:27:09.429183 env[1061]: time="2024-02-12T20:27:09.429140811Z" level=warning msg="cleaning up after shim disconnected" id=8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d namespace=k8s.io
Feb 12 20:27:09.429183 env[1061]: time="2024-02-12T20:27:09.429159416Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:09.432680 env[1061]: time="2024-02-12T20:27:09.429495199Z" level=info msg="StopContainer for \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\" returns successfully"
Feb 12 20:27:09.432680 env[1061]: time="2024-02-12T20:27:09.430242185Z" level=info msg="StopPodSandbox for \"f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0\""
Feb 12 20:27:09.432680 env[1061]: time="2024-02-12T20:27:09.430311713Z" level=info msg="Container to stop \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.439013 env[1061]: time="2024-02-12T20:27:09.438815269Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3514 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:09.441535 systemd[1]: cri-containerd-f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0.scope: Deactivated successfully.
Feb 12 20:27:09.449502 env[1061]: time="2024-02-12T20:27:09.449468042Z" level=info msg="StopContainer for \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\" returns successfully"
Feb 12 20:27:09.450027 env[1061]: time="2024-02-12T20:27:09.450004096Z" level=info msg="StopPodSandbox for \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\""
Feb 12 20:27:09.450165 env[1061]: time="2024-02-12T20:27:09.450142162Z" level=info msg="Container to stop \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.450269 env[1061]: time="2024-02-12T20:27:09.450248229Z" level=info msg="Container to stop \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.450367 env[1061]: time="2024-02-12T20:27:09.450346832Z" level=info msg="Container to stop \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.450451 env[1061]: time="2024-02-12T20:27:09.450432300Z" level=info msg="Container to stop \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.450544 env[1061]: time="2024-02-12T20:27:09.450524321Z" level=info msg="Container to stop \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:27:09.458041 systemd[1]: cri-containerd-6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb.scope: Deactivated successfully.
Feb 12 20:27:09.490357 env[1061]: time="2024-02-12T20:27:09.490294159Z" level=info msg="shim disconnected" id=f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0
Feb 12 20:27:09.491237 env[1061]: time="2024-02-12T20:27:09.491195461Z" level=warning msg="cleaning up after shim disconnected" id=f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0 namespace=k8s.io
Feb 12 20:27:09.491830 env[1061]: time="2024-02-12T20:27:09.491810653Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:09.497336 env[1061]: time="2024-02-12T20:27:09.497289998Z" level=info msg="shim disconnected" id=6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb
Feb 12 20:27:09.498288 env[1061]: time="2024-02-12T20:27:09.498265487Z" level=warning msg="cleaning up after shim disconnected" id=6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb namespace=k8s.io
Feb 12 20:27:09.498388 env[1061]: time="2024-02-12T20:27:09.498372796Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:09.508649 env[1061]: time="2024-02-12T20:27:09.508604968Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3565 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:09.509141 env[1061]: time="2024-02-12T20:27:09.509118110Z" level=info msg="TearDown network for sandbox \"f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0\" successfully"
Feb 12 20:27:09.509506 env[1061]: time="2024-02-12T20:27:09.509486353Z" level=info msg="StopPodSandbox for \"f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0\" returns successfully"
Feb 12 20:27:09.520758 env[1061]: time="2024-02-12T20:27:09.520709073Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3571 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:09.521310 env[1061]: time="2024-02-12T20:27:09.521273730Z" level=info msg="TearDown network for sandbox \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" successfully"
Feb 12 20:27:09.521407 env[1061]: time="2024-02-12T20:27:09.521387852Z" level=info msg="StopPodSandbox for \"6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb\" returns successfully"
Feb 12 20:27:09.600372 kubelet[1913]: I0212 20:27:09.600319 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-run\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600560 kubelet[1913]: I0212 20:27:09.600393 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg2gt\" (UniqueName: \"kubernetes.io/projected/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76-kube-api-access-wg2gt\") pod \"ee8e7d06-fbb8-43fa-a25b-766d0adc8f76\" (UID: \"ee8e7d06-fbb8-43fa-a25b-766d0adc8f76\") "
Feb 12 20:27:09.600560 kubelet[1913]: I0212 20:27:09.600432 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ef1117f-243e-434a-80e9-0d5771b1fe67-clustermesh-secrets\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600560 kubelet[1913]: I0212 20:27:09.600459 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-xtables-lock\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600560 kubelet[1913]: I0212 20:27:09.600492 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-hubble-tls\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600560 kubelet[1913]: I0212 20:27:09.600518 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-etc-cni-netd\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600560 kubelet[1913]: I0212 20:27:09.600546 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-config-path\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600768 kubelet[1913]: I0212 20:27:09.600580 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-lib-modules\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600768 kubelet[1913]: I0212 20:27:09.600611 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-cgroup\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600768 kubelet[1913]: I0212 20:27:09.600637 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-hostproc\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600768 kubelet[1913]: I0212 20:27:09.600665 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-host-proc-sys-kernel\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600768 kubelet[1913]: I0212 20:27:09.600693 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76-cilium-config-path\") pod \"ee8e7d06-fbb8-43fa-a25b-766d0adc8f76\" (UID: \"ee8e7d06-fbb8-43fa-a25b-766d0adc8f76\") "
Feb 12 20:27:09.600768 kubelet[1913]: I0212 20:27:09.600719 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-bpf-maps\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600966 kubelet[1913]: I0212 20:27:09.600741 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cni-path\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600966 kubelet[1913]: I0212 20:27:09.600766 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-host-proc-sys-net\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.600966 kubelet[1913]: I0212 20:27:09.600809 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbh7k\" (UniqueName: \"kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-kube-api-access-zbh7k\") pod \"7ef1117f-243e-434a-80e9-0d5771b1fe67\" (UID: \"7ef1117f-243e-434a-80e9-0d5771b1fe67\") "
Feb 12 20:27:09.602307 kubelet[1913]: I0212 20:27:09.601199 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.602382 kubelet[1913]: I0212 20:27:09.601301 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.602382 kubelet[1913]: I0212 20:27:09.602349 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.602477 kubelet[1913]: I0212 20:27:09.602380 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-hostproc" (OuterVolumeSpecName: "hostproc") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.602477 kubelet[1913]: I0212 20:27:09.602420 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.604849 kubelet[1913]: I0212 20:27:09.604819 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee8e7d06-fbb8-43fa-a25b-766d0adc8f76" (UID: "ee8e7d06-fbb8-43fa-a25b-766d0adc8f76"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:27:09.604934 kubelet[1913]: I0212 20:27:09.604864 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.604934 kubelet[1913]: I0212 20:27:09.604885 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cni-path" (OuterVolumeSpecName: "cni-path") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.604934 kubelet[1913]: I0212 20:27:09.604903 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.605239 kubelet[1913]: I0212 20:27:09.605186 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.607986 kubelet[1913]: I0212 20:27:09.607958 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:27:09.608156 kubelet[1913]: I0212 20:27:09.608138 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:27:09.608469 kubelet[1913]: I0212 20:27:09.608433 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ef1117f-243e-434a-80e9-0d5771b1fe67-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:27:09.611654 kubelet[1913]: I0212 20:27:09.611573 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-kube-api-access-zbh7k" (OuterVolumeSpecName: "kube-api-access-zbh7k") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "kube-api-access-zbh7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:27:09.612308 kubelet[1913]: I0212 20:27:09.612280 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76-kube-api-access-wg2gt" (OuterVolumeSpecName: "kube-api-access-wg2gt") pod "ee8e7d06-fbb8-43fa-a25b-766d0adc8f76" (UID: "ee8e7d06-fbb8-43fa-a25b-766d0adc8f76"). InnerVolumeSpecName "kube-api-access-wg2gt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:27:09.613196 kubelet[1913]: I0212 20:27:09.613136 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7ef1117f-243e-434a-80e9-0d5771b1fe67" (UID: "7ef1117f-243e-434a-80e9-0d5771b1fe67"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:27:09.702106 kubelet[1913]: I0212 20:27:09.702035 1913 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wg2gt\" (UniqueName: \"kubernetes.io/projected/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76-kube-api-access-wg2gt\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702106 kubelet[1913]: I0212 20:27:09.702075 1913 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ef1117f-243e-434a-80e9-0d5771b1fe67-clustermesh-secrets\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702106 kubelet[1913]: I0212 20:27:09.702093 1913 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-xtables-lock\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702106 kubelet[1913]: I0212 20:27:09.702109 1913 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-hubble-tls\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702106 kubelet[1913]: I0212 20:27:09.702123 1913 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-etc-cni-netd\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702106 kubelet[1913]: I0212 20:27:09.702138 1913 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-hostproc\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702815 kubelet[1913]: I0212 20:27:09.702157 1913 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-config-path\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702815 kubelet[1913]: I0212 20:27:09.702172 1913 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-lib-modules\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702815 kubelet[1913]: I0212 20:27:09.702185 1913 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-cgroup\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702815 kubelet[1913]: I0212 20:27:09.702221 1913 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-host-proc-sys-kernel\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702815 kubelet[1913]: I0212 20:27:09.702238 1913 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76-cilium-config-path\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702815 kubelet[1913]: I0212 20:27:09.702252 1913 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zbh7k\" (UniqueName: \"kubernetes.io/projected/7ef1117f-243e-434a-80e9-0d5771b1fe67-kube-api-access-zbh7k\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.702815 kubelet[1913]: I0212 20:27:09.702265 1913 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-bpf-maps\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.703374 kubelet[1913]: I0212 20:27:09.702277 1913 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cni-path\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.703374 kubelet[1913]: I0212 20:27:09.702292 1913 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-host-proc-sys-net\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.703374 kubelet[1913]: I0212 20:27:09.702306 1913 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ef1117f-243e-434a-80e9-0d5771b1fe67-cilium-run\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\""
Feb 12 20:27:09.801010 kubelet[1913]: I0212 20:27:09.800951 1913 scope.go:117] "RemoveContainer" containerID="8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d"
Feb 12 20:27:09.805717 systemd[1]: Removed slice kubepods-burstable-pod7ef1117f_243e_434a_80e9_0d5771b1fe67.slice.
Feb 12 20:27:09.805820 systemd[1]: kubepods-burstable-pod7ef1117f_243e_434a_80e9_0d5771b1fe67.slice: Consumed 9.567s CPU time.
Feb 12 20:27:09.813164 env[1061]: time="2024-02-12T20:27:09.812303007Z" level=info msg="RemoveContainer for \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\""
Feb 12 20:27:09.815692 systemd[1]: Removed slice kubepods-besteffort-podee8e7d06_fbb8_43fa_a25b_766d0adc8f76.slice.
Feb 12 20:27:09.822692 env[1061]: time="2024-02-12T20:27:09.821547056Z" level=info msg="RemoveContainer for \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\" returns successfully"
Feb 12 20:27:09.823003 kubelet[1913]: I0212 20:27:09.821844 1913 scope.go:117] "RemoveContainer" containerID="698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120"
Feb 12 20:27:09.823929 env[1061]: time="2024-02-12T20:27:09.823881517Z" level=info msg="RemoveContainer for \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\""
Feb 12 20:27:09.828246 env[1061]: time="2024-02-12T20:27:09.827811849Z" level=info msg="RemoveContainer for \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\" returns successfully"
Feb 12 20:27:09.828560 kubelet[1913]: I0212 20:27:09.828494 1913 scope.go:117] "RemoveContainer" containerID="d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8"
Feb 12 20:27:09.831378 env[1061]: time="2024-02-12T20:27:09.831099607Z" level=info msg="RemoveContainer for \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\""
Feb 12 20:27:09.841889 env[1061]: time="2024-02-12T20:27:09.841806851Z" level=info msg="RemoveContainer for \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\" returns successfully"
Feb 12 20:27:09.842880 kubelet[1913]: I0212 20:27:09.842810 1913 scope.go:117] "RemoveContainer" containerID="e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e"
Feb 12 20:27:09.862513 env[1061]: time="2024-02-12T20:27:09.855092638Z" level=info msg="RemoveContainer for \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\""
Feb 12 20:27:09.862513 env[1061]: time="2024-02-12T20:27:09.860890663Z" level=info msg="RemoveContainer for \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\" returns successfully"
Feb 12 20:27:09.865397 kubelet[1913]: I0212 20:27:09.862925 1913 scope.go:117] "RemoveContainer" containerID="fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9"
Feb 12 20:27:09.868673 env[1061]: time="2024-02-12T20:27:09.868624982Z" level=info msg="RemoveContainer for \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\""
Feb 12 20:27:09.874654 env[1061]: time="2024-02-12T20:27:09.874602842Z" level=info msg="RemoveContainer for \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\" returns successfully"
Feb 12 20:27:09.875689 kubelet[1913]: I0212 20:27:09.875646 1913 scope.go:117] "RemoveContainer" containerID="8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d"
Feb 12 20:27:09.877803 env[1061]: time="2024-02-12T20:27:09.877184541Z" level=error msg="ContainerStatus for \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\": not found"
Feb 12 20:27:09.879192 kubelet[1913]: E0212 20:27:09.879144 1913 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\": not found" containerID="8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d"
Feb 12 20:27:09.880430 kubelet[1913]: I0212 20:27:09.880372 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d"} err="failed to get container status \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8439983932b45a0e4842b11b31dc88be1775fb3a933cab7ff5b0e585a7b12d5d\": not found"
Feb 12 20:27:09.880430 kubelet[1913]: I0212 20:27:09.880427 1913 scope.go:117] "RemoveContainer" containerID="698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120"
Feb 12 20:27:09.880799 env[1061]: time="2024-02-12T20:27:09.880724579Z" level=error msg="ContainerStatus for \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\": not found"
Feb 12 20:27:09.880925 kubelet[1913]: E0212 20:27:09.880906 1913 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\": not found" containerID="698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120"
Feb 12 20:27:09.880973 kubelet[1913]: I0212 20:27:09.880938 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120"} err="failed to get container status \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\": rpc error: code = NotFound desc = an error occurred when try to find container \"698535c4c70e1f63b654a7392731170dd8675a15249ea7de80d34157acc6e120\": not found"
Feb 12 20:27:09.880973 kubelet[1913]: I0212 20:27:09.880950 1913 scope.go:117] "RemoveContainer" containerID="d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8"
Feb 12 20:27:09.881373 env[1061]: time="2024-02-12T20:27:09.881258269Z" level=error msg="ContainerStatus for \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\": not found"
Feb 12 20:27:09.881501 kubelet[1913]: E0212 20:27:09.881468 1913 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\": not found" containerID="d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8"
Feb 12 20:27:09.881547 kubelet[1913]: I0212 20:27:09.881528 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8"} err="failed to get container status \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4e9299f24b59108a3c1f3f3afc6a89ea7cfefa73aa0b13a1889569533bc4ec8\": not found"
Feb 12 20:27:09.881547 kubelet[1913]: I0212 20:27:09.881546 1913 scope.go:117] "RemoveContainer" containerID="e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e"
Feb 12 20:27:09.881830 env[1061]: time="2024-02-12T20:27:09.881772462Z" level=error msg="ContainerStatus for \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\": not found"
Feb 12 20:27:09.881982 kubelet[1913]: E0212 20:27:09.881935 1913 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\": not found" containerID="e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e"
Feb 12 20:27:09.882031 kubelet[1913]: I0212 20:27:09.881986 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e"} err="failed to get container status \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7db79735ad80120095f598b3bf8cc1de68b982f111d7f7cf9240a9cde8f800e\": not found"
Feb 12 20:27:09.882031 kubelet[1913]: I0212 20:27:09.881998 1913 scope.go:117] "RemoveContainer" containerID="fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9"
Feb 12 20:27:09.882273 env[1061]: time="2024-02-12T20:27:09.882193935Z" level=error msg="ContainerStatus for \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\": not found"
Feb 12 20:27:09.882425 kubelet[1913]: E0212 20:27:09.882378 1913 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\": not found" containerID="fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9"
Feb 12 20:27:09.882468 kubelet[1913]: I0212 20:27:09.882437 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9"} err="failed to get container status \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdc425b501e1ec2b07b7c2760228dc33298f9cb1b41806724bb030870da06df9\": not found"
Feb 12 20:27:09.882468 kubelet[1913]: I0212 20:27:09.882450 1913 scope.go:117] "RemoveContainer" containerID="79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425"
Feb 12 20:27:09.884246 env[1061]: time="2024-02-12T20:27:09.883930195Z" level=info msg="RemoveContainer for \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\""
Feb 12 20:27:09.887683 env[1061]: time="2024-02-12T20:27:09.887613208Z" level=info msg="RemoveContainer for \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\" returns successfully"
Feb 12 20:27:09.887938 kubelet[1913]: I0212 20:27:09.887924 1913 scope.go:117] "RemoveContainer" containerID="79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425"
Feb 12 20:27:09.888309 env[1061]: time="2024-02-12T20:27:09.888195398Z" level=error msg="ContainerStatus for \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\": not found"
Feb 12 20:27:09.888483 kubelet[1913]: E0212 20:27:09.888449 1913 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\": not found" containerID="79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425"
Feb 12 20:27:09.888600 kubelet[1913]: I0212 20:27:09.888590 1913 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425"} err="failed to get container status \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\": rpc error: code = NotFound desc = an error occurred when try to find container \"79ec6aca27499a8511b3cf8028f1629a13118401597cddb75dab6966e4eca425\": not found"
Feb 12 20:27:10.278859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb-rootfs.mount: Deactivated successfully.
Feb 12 20:27:10.279778 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b2396b0ea4b66e9525ceebe388f0980edf52a9dc9360ce38c28987b2591ddcb-shm.mount: Deactivated successfully. Feb 12 20:27:10.280174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0-rootfs.mount: Deactivated successfully. Feb 12 20:27:10.280602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f40b9a38594819ef69149900a6553883f73f9100e87cb0c68dc62a79a61efef0-shm.mount: Deactivated successfully. Feb 12 20:27:10.280993 systemd[1]: var-lib-kubelet-pods-7ef1117f\x2d243e\x2d434a\x2d80e9\x2d0d5771b1fe67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzbh7k.mount: Deactivated successfully. Feb 12 20:27:10.281470 systemd[1]: var-lib-kubelet-pods-ee8e7d06\x2dfbb8\x2d43fa\x2da25b\x2d766d0adc8f76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwg2gt.mount: Deactivated successfully. Feb 12 20:27:10.281874 systemd[1]: var-lib-kubelet-pods-7ef1117f\x2d243e\x2d434a\x2d80e9\x2d0d5771b1fe67-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:27:10.282312 systemd[1]: var-lib-kubelet-pods-7ef1117f\x2d243e\x2d434a\x2d80e9\x2d0d5771b1fe67-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 20:27:11.282637 kubelet[1913]: I0212 20:27:11.282564 1913 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7ef1117f-243e-434a-80e9-0d5771b1fe67" path="/var/lib/kubelet/pods/7ef1117f-243e-434a-80e9-0d5771b1fe67/volumes" Feb 12 20:27:11.284145 kubelet[1913]: I0212 20:27:11.284082 1913 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ee8e7d06-fbb8-43fa-a25b-766d0adc8f76" path="/var/lib/kubelet/pods/ee8e7d06-fbb8-43fa-a25b-766d0adc8f76/volumes" Feb 12 20:27:11.328366 sshd[3435]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:11.337126 systemd[1]: Started sshd@20-172.24.4.211:22-172.24.4.1:59264.service. Feb 12 20:27:11.345903 systemd[1]: sshd@19-172.24.4.211:22-172.24.4.1:59256.service: Deactivated successfully. Feb 12 20:27:11.348080 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 20:27:11.349030 systemd[1]: session-20.scope: Consumed 1.173s CPU time. Feb 12 20:27:11.352339 systemd-logind[1050]: Session 20 logged out. Waiting for processes to exit. Feb 12 20:27:11.356127 systemd-logind[1050]: Removed session 20. Feb 12 20:27:11.929653 kubelet[1913]: I0212 20:27:11.929518 1913 setters.go:552] "Node became not ready" node="ci-3510-3-2-4-c19eb846e8.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-12T20:27:11Z","lastTransitionTime":"2024-02-12T20:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 12 20:27:12.957509 sshd[3597]: Accepted publickey for core from 172.24.4.1 port 59264 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:27:12.959795 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:12.970195 systemd-logind[1050]: New session 21 of user core. Feb 12 20:27:12.972808 systemd[1]: Started session-21.scope. 
Feb 12 20:27:13.275947 kubelet[1913]: E0212 20:27:13.275693 1913 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-hmkml" podUID="63cb73e4-ef80-455f-bdc8-6463b258ef2a" Feb 12 20:27:14.273412 kubelet[1913]: I0212 20:27:14.273377 1913 topology_manager.go:215] "Topology Admit Handler" podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" podNamespace="kube-system" podName="cilium-7s4nl" Feb 12 20:27:14.273679 kubelet[1913]: E0212 20:27:14.273665 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef1117f-243e-434a-80e9-0d5771b1fe67" containerName="apply-sysctl-overwrites" Feb 12 20:27:14.273802 kubelet[1913]: E0212 20:27:14.273791 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef1117f-243e-434a-80e9-0d5771b1fe67" containerName="mount-bpf-fs" Feb 12 20:27:14.273921 kubelet[1913]: E0212 20:27:14.273909 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef1117f-243e-434a-80e9-0d5771b1fe67" containerName="mount-cgroup" Feb 12 20:27:14.274024 kubelet[1913]: E0212 20:27:14.274014 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef1117f-243e-434a-80e9-0d5771b1fe67" containerName="clean-cilium-state" Feb 12 20:27:14.274117 kubelet[1913]: E0212 20:27:14.274107 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ef1117f-243e-434a-80e9-0d5771b1fe67" containerName="cilium-agent" Feb 12 20:27:14.274227 kubelet[1913]: E0212 20:27:14.274198 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee8e7d06-fbb8-43fa-a25b-766d0adc8f76" containerName="cilium-operator" Feb 12 20:27:14.274373 kubelet[1913]: I0212 20:27:14.274332 1913 memory_manager.go:346] "RemoveStaleState removing state" podUID="ee8e7d06-fbb8-43fa-a25b-766d0adc8f76" 
containerName="cilium-operator" Feb 12 20:27:14.274453 kubelet[1913]: I0212 20:27:14.274443 1913 memory_manager.go:346] "RemoveStaleState removing state" podUID="7ef1117f-243e-434a-80e9-0d5771b1fe67" containerName="cilium-agent" Feb 12 20:27:14.286104 systemd[1]: Created slice kubepods-burstable-podc0aa08a0_3ea6_4250_8142_7ffc9a314103.slice. Feb 12 20:27:14.335523 kubelet[1913]: I0212 20:27:14.335491 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-run\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.336020 kubelet[1913]: I0212 20:27:14.336007 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-lib-modules\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.336150 kubelet[1913]: I0212 20:27:14.336139 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-host-proc-sys-kernel\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.336295 kubelet[1913]: I0212 20:27:14.336283 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-host-proc-sys-net\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.336417 kubelet[1913]: I0212 20:27:14.336406 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-etc-cni-netd\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.336546 kubelet[1913]: I0212 20:27:14.336535 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-config-path\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.336693 kubelet[1913]: I0212 20:27:14.336653 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0aa08a0-3ea6-4250-8142-7ffc9a314103-hubble-tls\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.336804 kubelet[1913]: I0212 20:27:14.336793 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nx8w\" (UniqueName: \"kubernetes.io/projected/c0aa08a0-3ea6-4250-8142-7ffc9a314103-kube-api-access-9nx8w\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.336921 kubelet[1913]: I0212 20:27:14.336910 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-bpf-maps\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.337037 kubelet[1913]: I0212 20:27:14.337027 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-xtables-lock\") pod 
\"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.337197 kubelet[1913]: I0212 20:27:14.337185 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-cgroup\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.337346 kubelet[1913]: I0212 20:27:14.337324 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0aa08a0-3ea6-4250-8142-7ffc9a314103-clustermesh-secrets\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.337467 kubelet[1913]: I0212 20:27:14.337456 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cni-path\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.337586 kubelet[1913]: I0212 20:27:14.337576 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-ipsec-secrets\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.337696 kubelet[1913]: I0212 20:27:14.337685 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-hostproc\") pod \"cilium-7s4nl\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " pod="kube-system/cilium-7s4nl" Feb 12 20:27:14.426975 
kubelet[1913]: E0212 20:27:14.426846 1913 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:27:14.498539 sshd[3597]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:14.510474 systemd[1]: sshd@20-172.24.4.211:22-172.24.4.1:59264.service: Deactivated successfully. Feb 12 20:27:14.511486 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 20:27:14.512498 systemd-logind[1050]: Session 21 logged out. Waiting for processes to exit. Feb 12 20:27:14.513748 systemd[1]: Started sshd@21-172.24.4.211:22-172.24.4.1:59270.service. Feb 12 20:27:14.516533 systemd-logind[1050]: Removed session 21. Feb 12 20:27:14.593269 env[1061]: time="2024-02-12T20:27:14.593103118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7s4nl,Uid:c0aa08a0-3ea6-4250-8142-7ffc9a314103,Namespace:kube-system,Attempt:0,}" Feb 12 20:27:14.623419 env[1061]: time="2024-02-12T20:27:14.623132497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:14.623797 env[1061]: time="2024-02-12T20:27:14.623350532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:14.624057 env[1061]: time="2024-02-12T20:27:14.623951348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:14.624707 env[1061]: time="2024-02-12T20:27:14.624605872Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d pid=3622 runtime=io.containerd.runc.v2 Feb 12 20:27:14.649603 systemd[1]: Started cri-containerd-2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d.scope. Feb 12 20:27:14.685899 env[1061]: time="2024-02-12T20:27:14.685840304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7s4nl,Uid:c0aa08a0-3ea6-4250-8142-7ffc9a314103,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d\"" Feb 12 20:27:14.695752 env[1061]: time="2024-02-12T20:27:14.695661263Z" level=info msg="CreateContainer within sandbox \"2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:27:14.714331 env[1061]: time="2024-02-12T20:27:14.714237505Z" level=info msg="CreateContainer within sandbox \"2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\"" Feb 12 20:27:14.716919 env[1061]: time="2024-02-12T20:27:14.716527404Z" level=info msg="StartContainer for \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\"" Feb 12 20:27:14.734478 systemd[1]: Started cri-containerd-f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938.scope. Feb 12 20:27:14.756348 systemd[1]: cri-containerd-f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938.scope: Deactivated successfully. 
Feb 12 20:27:14.779358 env[1061]: time="2024-02-12T20:27:14.779284333Z" level=info msg="shim disconnected" id=f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938 Feb 12 20:27:14.779616 env[1061]: time="2024-02-12T20:27:14.779373398Z" level=warning msg="cleaning up after shim disconnected" id=f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938 namespace=k8s.io Feb 12 20:27:14.779616 env[1061]: time="2024-02-12T20:27:14.779386984Z" level=info msg="cleaning up dead shim" Feb 12 20:27:14.787718 env[1061]: time="2024-02-12T20:27:14.787658457Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3679 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:27:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:27:14.788463 env[1061]: time="2024-02-12T20:27:14.788337046Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Feb 12 20:27:14.790395 env[1061]: time="2024-02-12T20:27:14.790355853Z" level=error msg="Failed to pipe stderr of container \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\"" error="reading from a closed fifo" Feb 12 20:27:14.792405 env[1061]: time="2024-02-12T20:27:14.792363790Z" level=error msg="Failed to pipe stdout of container \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\"" error="reading from a closed fifo" Feb 12 20:27:14.796316 env[1061]: time="2024-02-12T20:27:14.796257237Z" level=error msg="StartContainer for \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:27:14.796704 kubelet[1913]: E0212 20:27:14.796680 1913 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938" Feb 12 20:27:14.799332 kubelet[1913]: E0212 20:27:14.799289 1913 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:27:14.799332 kubelet[1913]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:27:14.799332 kubelet[1913]: rm /hostbin/cilium-mount Feb 12 20:27:14.799643 kubelet[1913]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9nx8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7s4nl_kube-system(c0aa08a0-3ea6-4250-8142-7ffc9a314103): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:27:14.799643 kubelet[1913]: E0212 20:27:14.799374 1913 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7s4nl" podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" Feb 12 20:27:14.847298 env[1061]: time="2024-02-12T20:27:14.844249610Z" level=info msg="CreateContainer within sandbox \"2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 12 20:27:14.868952 env[1061]: time="2024-02-12T20:27:14.868891259Z" level=info msg="CreateContainer within sandbox \"2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39\"" Feb 12 20:27:14.871364 env[1061]: time="2024-02-12T20:27:14.871327532Z" level=info 
msg="StartContainer for \"e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39\"" Feb 12 20:27:14.897556 systemd[1]: Started cri-containerd-e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39.scope. Feb 12 20:27:14.911298 systemd[1]: cri-containerd-e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39.scope: Deactivated successfully. Feb 12 20:27:14.922046 env[1061]: time="2024-02-12T20:27:14.921987896Z" level=info msg="shim disconnected" id=e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39 Feb 12 20:27:14.922365 env[1061]: time="2024-02-12T20:27:14.922335161Z" level=warning msg="cleaning up after shim disconnected" id=e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39 namespace=k8s.io Feb 12 20:27:14.922460 env[1061]: time="2024-02-12T20:27:14.922444944Z" level=info msg="cleaning up dead shim" Feb 12 20:27:14.931341 env[1061]: time="2024-02-12T20:27:14.931268643Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3716 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:27:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:27:14.931706 env[1061]: time="2024-02-12T20:27:14.931624464Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Feb 12 20:27:14.932243 env[1061]: time="2024-02-12T20:27:14.932188410Z" level=error msg="Failed to pipe stderr of container \"e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39\"" error="reading from a closed fifo" Feb 12 20:27:14.932562 env[1061]: time="2024-02-12T20:27:14.932325695Z" level=error msg="Failed to pipe stdout of container \"e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39\"" error="reading from a closed 
fifo" Feb 12 20:27:14.935839 env[1061]: time="2024-02-12T20:27:14.935803601Z" level=error msg="StartContainer for \"e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:27:14.936303 kubelet[1913]: E0212 20:27:14.936276 1913 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39" Feb 12 20:27:14.936466 kubelet[1913]: E0212 20:27:14.936446 1913 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:27:14.936466 kubelet[1913]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:27:14.936466 kubelet[1913]: rm /hostbin/cilium-mount Feb 12 20:27:14.936466 kubelet[1913]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9nx8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7s4nl_kube-system(c0aa08a0-3ea6-4250-8142-7ffc9a314103): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:27:14.936658 kubelet[1913]: E0212 20:27:14.936523 1913 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7s4nl" podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" Feb 12 20:27:15.275547 kubelet[1913]: E0212 20:27:15.275315 1913 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-hmkml" podUID="63cb73e4-ef80-455f-bdc8-6463b258ef2a" Feb 12 20:27:15.843780 kubelet[1913]: I0212 20:27:15.843695 1913 scope.go:117] "RemoveContainer" containerID="f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938" Feb 12 20:27:15.844976 kubelet[1913]: I0212 20:27:15.844550 1913 scope.go:117] "RemoveContainer" containerID="f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938" Feb 12 20:27:15.852628 env[1061]: time="2024-02-12T20:27:15.852506256Z" level=info msg="RemoveContainer for \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\"" Feb 12 20:27:15.853833 env[1061]: time="2024-02-12T20:27:15.853774770Z" level=info msg="RemoveContainer for \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\"" Feb 12 20:27:15.854043 env[1061]: time="2024-02-12T20:27:15.853944215Z" level=error msg="RemoveContainer for \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\" failed" error="failed to set removing state for container \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\": container is already in removing state" Feb 12 20:27:15.855046 kubelet[1913]: E0212 20:27:15.854900 1913 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\": container is already in removing state" 
containerID="f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938" Feb 12 20:27:15.860613 kubelet[1913]: E0212 20:27:15.860494 1913 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938": container is already in removing state; Skipping pod "cilium-7s4nl_kube-system(c0aa08a0-3ea6-4250-8142-7ffc9a314103)" Feb 12 20:27:15.864184 env[1061]: time="2024-02-12T20:27:15.864089108Z" level=info msg="RemoveContainer for \"f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938\" returns successfully" Feb 12 20:27:15.865503 kubelet[1913]: E0212 20:27:15.865465 1913 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-7s4nl_kube-system(c0aa08a0-3ea6-4250-8142-7ffc9a314103)\"" pod="kube-system/cilium-7s4nl" podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" Feb 12 20:27:15.933131 sshd[3613]: Accepted publickey for core from 172.24.4.1 port 59270 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:27:15.936288 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:15.947690 systemd-logind[1050]: New session 22 of user core. Feb 12 20:27:15.950309 systemd[1]: Started session-22.scope. Feb 12 20:27:16.741809 sshd[3613]: pam_unix(sshd:session): session closed for user core Feb 12 20:27:16.746737 systemd[1]: Started sshd@22-172.24.4.211:22-172.24.4.1:43840.service. Feb 12 20:27:16.747356 systemd[1]: sshd@21-172.24.4.211:22-172.24.4.1:59270.service: Deactivated successfully. Feb 12 20:27:16.748147 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:27:16.759797 systemd-logind[1050]: Session 22 logged out. Waiting for processes to exit. 
Feb 12 20:27:16.763655 systemd-logind[1050]: Removed session 22. Feb 12 20:27:16.849587 env[1061]: time="2024-02-12T20:27:16.849303684Z" level=info msg="StopPodSandbox for \"2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d\"" Feb 12 20:27:16.850115 env[1061]: time="2024-02-12T20:27:16.850065649Z" level=info msg="Container to stop \"e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:27:16.855468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d-shm.mount: Deactivated successfully. Feb 12 20:27:16.873890 systemd[1]: cri-containerd-2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d.scope: Deactivated successfully. Feb 12 20:27:16.926036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d-rootfs.mount: Deactivated successfully. 
Feb 12 20:27:16.938728 env[1061]: time="2024-02-12T20:27:16.938632047Z" level=info msg="shim disconnected" id=2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d Feb 12 20:27:16.940197 env[1061]: time="2024-02-12T20:27:16.940122424Z" level=warning msg="cleaning up after shim disconnected" id=2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d namespace=k8s.io Feb 12 20:27:16.940445 env[1061]: time="2024-02-12T20:27:16.940404147Z" level=info msg="cleaning up dead shim" Feb 12 20:27:16.957900 env[1061]: time="2024-02-12T20:27:16.957820391Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3761 runtime=io.containerd.runc.v2\n" Feb 12 20:27:16.958885 env[1061]: time="2024-02-12T20:27:16.958826198Z" level=info msg="TearDown network for sandbox \"2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d\" successfully" Feb 12 20:27:16.959129 env[1061]: time="2024-02-12T20:27:16.959055505Z" level=info msg="StopPodSandbox for \"2f6554f29c7d592a93a6dc5daf6c6b1bcec00e05274309a2933b81520e2eda2d\" returns successfully" Feb 12 20:27:17.064337 kubelet[1913]: I0212 20:27:17.063980 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.064337 kubelet[1913]: I0212 20:27:17.064295 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-run\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.066912 kubelet[1913]: I0212 20:27:17.066866 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.066975 kubelet[1913]: I0212 20:27:17.066802 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-host-proc-sys-net\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.067730 kubelet[1913]: I0212 20:27:17.067696 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nx8w\" (UniqueName: \"kubernetes.io/projected/c0aa08a0-3ea6-4250-8142-7ffc9a314103-kube-api-access-9nx8w\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.067817 kubelet[1913]: I0212 20:27:17.067792 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-config-path\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.067864 kubelet[1913]: I0212 20:27:17.067855 1913 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-hostproc\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.067936 kubelet[1913]: I0212 20:27:17.067908 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-etc-cni-netd\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.067981 kubelet[1913]: I0212 20:27:17.067973 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-host-proc-sys-kernel\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068048 kubelet[1913]: I0212 20:27:17.068024 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cni-path\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068124 kubelet[1913]: I0212 20:27:17.068095 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-ipsec-secrets\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068290 kubelet[1913]: I0212 20:27:17.068261 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0aa08a0-3ea6-4250-8142-7ffc9a314103-clustermesh-secrets\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: 
\"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068368 kubelet[1913]: I0212 20:27:17.068341 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0aa08a0-3ea6-4250-8142-7ffc9a314103-hubble-tls\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068418 kubelet[1913]: I0212 20:27:17.068403 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-bpf-maps\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068464 kubelet[1913]: I0212 20:27:17.068457 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-xtables-lock\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068533 kubelet[1913]: I0212 20:27:17.068507 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-cgroup\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068643 kubelet[1913]: I0212 20:27:17.068622 1913 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-lib-modules\") pod \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\" (UID: \"c0aa08a0-3ea6-4250-8142-7ffc9a314103\") " Feb 12 20:27:17.068741 kubelet[1913]: I0212 20:27:17.068717 1913 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-run\") on node 
\"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.068791 kubelet[1913]: I0212 20:27:17.068773 1913 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-host-proc-sys-net\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.068865 kubelet[1913]: I0212 20:27:17.068829 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.072484 systemd[1]: var-lib-kubelet-pods-c0aa08a0\x2d3ea6\x2d4250\x2d8142\x2d7ffc9a314103-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9nx8w.mount: Deactivated successfully. Feb 12 20:27:17.075503 kubelet[1913]: I0212 20:27:17.075444 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0aa08a0-3ea6-4250-8142-7ffc9a314103-kube-api-access-9nx8w" (OuterVolumeSpecName: "kube-api-access-9nx8w") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "kube-api-access-9nx8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:27:17.076580 kubelet[1913]: I0212 20:27:17.076547 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:27:17.082078 systemd[1]: var-lib-kubelet-pods-c0aa08a0\x2d3ea6\x2d4250\x2d8142\x2d7ffc9a314103-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:27:17.083698 kubelet[1913]: I0212 20:27:17.083663 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0aa08a0-3ea6-4250-8142-7ffc9a314103-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:27:17.086066 systemd[1]: var-lib-kubelet-pods-c0aa08a0\x2d3ea6\x2d4250\x2d8142\x2d7ffc9a314103-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:27:17.088998 kubelet[1913]: I0212 20:27:17.088967 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0aa08a0-3ea6-4250-8142-7ffc9a314103-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:27:17.089145 kubelet[1913]: I0212 20:27:17.089130 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.089265 kubelet[1913]: I0212 20:27:17.089249 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.089358 kubelet[1913]: I0212 20:27:17.089343 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.089442 kubelet[1913]: I0212 20:27:17.089428 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.089545 kubelet[1913]: I0212 20:27:17.089530 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-hostproc" (OuterVolumeSpecName: "hostproc") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.089617 kubelet[1913]: I0212 20:27:17.089536 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:27:17.089684 kubelet[1913]: I0212 20:27:17.089602 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.089785 kubelet[1913]: I0212 20:27:17.089637 1913 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cni-path" (OuterVolumeSpecName: "cni-path") pod "c0aa08a0-3ea6-4250-8142-7ffc9a314103" (UID: "c0aa08a0-3ea6-4250-8142-7ffc9a314103"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:27:17.169395 kubelet[1913]: I0212 20:27:17.169348 1913 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0aa08a0-3ea6-4250-8142-7ffc9a314103-clustermesh-secrets\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.169664 kubelet[1913]: I0212 20:27:17.169652 1913 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0aa08a0-3ea6-4250-8142-7ffc9a314103-hubble-tls\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.169749 kubelet[1913]: I0212 20:27:17.169735 1913 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-xtables-lock\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.169826 kubelet[1913]: I0212 20:27:17.169817 1913 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-cgroup\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.169902 kubelet[1913]: I0212 20:27:17.169891 1913 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-lib-modules\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.169974 kubelet[1913]: I0212 20:27:17.169965 1913 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-bpf-maps\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.170053 kubelet[1913]: I0212 20:27:17.170043 1913 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9nx8w\" (UniqueName: 
\"kubernetes.io/projected/c0aa08a0-3ea6-4250-8142-7ffc9a314103-kube-api-access-9nx8w\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.170128 kubelet[1913]: I0212 20:27:17.170118 1913 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-etc-cni-netd\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.170238 kubelet[1913]: I0212 20:27:17.170191 1913 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-config-path\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.170328 kubelet[1913]: I0212 20:27:17.170318 1913 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-hostproc\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.170401 kubelet[1913]: I0212 20:27:17.170392 1913 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cni-path\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.170475 kubelet[1913]: I0212 20:27:17.170465 1913 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c0aa08a0-3ea6-4250-8142-7ffc9a314103-cilium-ipsec-secrets\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.170548 kubelet[1913]: I0212 20:27:17.170539 1913 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0aa08a0-3ea6-4250-8142-7ffc9a314103-host-proc-sys-kernel\") on node \"ci-3510-3-2-4-c19eb846e8.novalocal\" DevicePath \"\"" Feb 12 20:27:17.275910 kubelet[1913]: E0212 20:27:17.275820 1913 
pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-hmkml" podUID="63cb73e4-ef80-455f-bdc8-6463b258ef2a" Feb 12 20:27:17.293244 systemd[1]: Removed slice kubepods-burstable-podc0aa08a0_3ea6_4250_8142_7ffc9a314103.slice. Feb 12 20:27:17.856898 systemd[1]: var-lib-kubelet-pods-c0aa08a0\x2d3ea6\x2d4250\x2d8142\x2d7ffc9a314103-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:27:17.858871 kubelet[1913]: I0212 20:27:17.858712 1913 scope.go:117] "RemoveContainer" containerID="e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39" Feb 12 20:27:17.862068 env[1061]: time="2024-02-12T20:27:17.862006944Z" level=info msg="RemoveContainer for \"e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39\"" Feb 12 20:27:17.874593 env[1061]: time="2024-02-12T20:27:17.874520158Z" level=info msg="RemoveContainer for \"e8cca9f030bfe593b3edf883fe0347a06d894c98b6bb4187d75de1b0c26e2d39\" returns successfully" Feb 12 20:27:17.892889 kubelet[1913]: W0212 20:27:17.892817 1913 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0aa08a0_3ea6_4250_8142_7ffc9a314103.slice/cri-containerd-f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938.scope WatchSource:0}: container "f72464786268fb16426ba7f519df704b5dee875779f77a04a39ece3b07152938" in namespace "k8s.io": not found Feb 12 20:27:17.951133 kubelet[1913]: I0212 20:27:17.951081 1913 topology_manager.go:215] "Topology Admit Handler" podUID="262ad019-84e7-47b5-b1f4-5ce4f077c650" podNamespace="kube-system" podName="cilium-b2khb" Feb 12 20:27:17.951538 kubelet[1913]: E0212 20:27:17.951486 1913 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" containerName="mount-cgroup" Feb 12 20:27:17.951754 kubelet[1913]: E0212 20:27:17.951730 1913 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" containerName="mount-cgroup" Feb 12 20:27:17.951975 kubelet[1913]: I0212 20:27:17.951947 1913 memory_manager.go:346] "RemoveStaleState removing state" podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" containerName="mount-cgroup" Feb 12 20:27:17.952282 kubelet[1913]: I0212 20:27:17.952188 1913 memory_manager.go:346] "RemoveStaleState removing state" podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" containerName="mount-cgroup" Feb 12 20:27:17.961502 systemd[1]: Created slice kubepods-burstable-pod262ad019_84e7_47b5_b1f4_5ce4f077c650.slice. Feb 12 20:27:18.077882 kubelet[1913]: I0212 20:27:18.077826 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mjk\" (UniqueName: \"kubernetes.io/projected/262ad019-84e7-47b5-b1f4-5ce4f077c650-kube-api-access-g6mjk\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.078800 kubelet[1913]: I0212 20:27:18.078767 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-hostproc\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.079063 kubelet[1913]: I0212 20:27:18.079033 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-cilium-cgroup\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.079352 kubelet[1913]: I0212 20:27:18.079324 1913 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-cilium-run\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.079599 kubelet[1913]: I0212 20:27:18.079572 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-bpf-maps\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.079835 kubelet[1913]: I0212 20:27:18.079810 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/262ad019-84e7-47b5-b1f4-5ce4f077c650-cilium-ipsec-secrets\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.080166 kubelet[1913]: I0212 20:27:18.080140 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-host-proc-sys-net\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.080517 kubelet[1913]: I0212 20:27:18.080482 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-cni-path\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.080808 kubelet[1913]: I0212 20:27:18.080780 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-etc-cni-netd\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.081064 kubelet[1913]: I0212 20:27:18.081037 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-xtables-lock\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.081363 kubelet[1913]: I0212 20:27:18.081329 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-lib-modules\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.081631 kubelet[1913]: I0212 20:27:18.081604 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/262ad019-84e7-47b5-b1f4-5ce4f077c650-clustermesh-secrets\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.081877 kubelet[1913]: I0212 20:27:18.081851 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/262ad019-84e7-47b5-b1f4-5ce4f077c650-hubble-tls\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.082134 kubelet[1913]: I0212 20:27:18.082108 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/262ad019-84e7-47b5-b1f4-5ce4f077c650-cilium-config-path\") pod \"cilium-b2khb\" (UID: 
\"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.082432 kubelet[1913]: I0212 20:27:18.082405 1913 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/262ad019-84e7-47b5-b1f4-5ce4f077c650-host-proc-sys-kernel\") pod \"cilium-b2khb\" (UID: \"262ad019-84e7-47b5-b1f4-5ce4f077c650\") " pod="kube-system/cilium-b2khb" Feb 12 20:27:18.270334 env[1061]: time="2024-02-12T20:27:18.270071827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b2khb,Uid:262ad019-84e7-47b5-b1f4-5ce4f077c650,Namespace:kube-system,Attempt:0,}" Feb 12 20:27:18.310271 sshd[3741]: Accepted publickey for core from 172.24.4.1 port 43840 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:27:18.312935 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:27:18.323994 systemd-logind[1050]: New session 23 of user core. Feb 12 20:27:18.325149 systemd[1]: Started session-23.scope. Feb 12 20:27:18.432626 env[1061]: time="2024-02-12T20:27:18.428054956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:27:18.432626 env[1061]: time="2024-02-12T20:27:18.428233978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:27:18.432626 env[1061]: time="2024-02-12T20:27:18.428273121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:27:18.432626 env[1061]: time="2024-02-12T20:27:18.428658376Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0 pid=3792 runtime=io.containerd.runc.v2 Feb 12 20:27:18.466919 systemd[1]: Started cri-containerd-abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0.scope. Feb 12 20:27:18.517966 env[1061]: time="2024-02-12T20:27:18.517848664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b2khb,Uid:262ad019-84e7-47b5-b1f4-5ce4f077c650,Namespace:kube-system,Attempt:0,} returns sandbox id \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\"" Feb 12 20:27:18.523739 env[1061]: time="2024-02-12T20:27:18.523039293Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:27:18.541449 env[1061]: time="2024-02-12T20:27:18.541364891Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"85cd978e8d35c5fc070e2d37815d76e1e939f4523ea44797f5836f8f01909be0\"" Feb 12 20:27:18.542567 env[1061]: time="2024-02-12T20:27:18.542533101Z" level=info msg="StartContainer for \"85cd978e8d35c5fc070e2d37815d76e1e939f4523ea44797f5836f8f01909be0\"" Feb 12 20:27:18.560156 systemd[1]: Started cri-containerd-85cd978e8d35c5fc070e2d37815d76e1e939f4523ea44797f5836f8f01909be0.scope. 
Feb 12 20:27:18.595420 env[1061]: time="2024-02-12T20:27:18.595360706Z" level=info msg="StartContainer for \"85cd978e8d35c5fc070e2d37815d76e1e939f4523ea44797f5836f8f01909be0\" returns successfully" Feb 12 20:27:18.605891 systemd[1]: cri-containerd-85cd978e8d35c5fc070e2d37815d76e1e939f4523ea44797f5836f8f01909be0.scope: Deactivated successfully. Feb 12 20:27:18.658446 env[1061]: time="2024-02-12T20:27:18.658369086Z" level=info msg="shim disconnected" id=85cd978e8d35c5fc070e2d37815d76e1e939f4523ea44797f5836f8f01909be0 Feb 12 20:27:18.658446 env[1061]: time="2024-02-12T20:27:18.658434408Z" level=warning msg="cleaning up after shim disconnected" id=85cd978e8d35c5fc070e2d37815d76e1e939f4523ea44797f5836f8f01909be0 namespace=k8s.io Feb 12 20:27:18.658446 env[1061]: time="2024-02-12T20:27:18.658447212Z" level=info msg="cleaning up dead shim" Feb 12 20:27:18.668417 env[1061]: time="2024-02-12T20:27:18.668357444Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3876 runtime=io.containerd.runc.v2\n" Feb 12 20:27:18.868788 env[1061]: time="2024-02-12T20:27:18.868645880Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:27:18.893170 env[1061]: time="2024-02-12T20:27:18.893091804Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b696e4db24b860be582ac19294e099ccea87affe725bfebc90a628e248921f11\"" Feb 12 20:27:18.894224 env[1061]: time="2024-02-12T20:27:18.894181667Z" level=info msg="StartContainer for \"b696e4db24b860be582ac19294e099ccea87affe725bfebc90a628e248921f11\"" Feb 12 20:27:18.924592 systemd[1]: Started cri-containerd-b696e4db24b860be582ac19294e099ccea87affe725bfebc90a628e248921f11.scope. 
Feb 12 20:27:18.980431 systemd[1]: cri-containerd-b696e4db24b860be582ac19294e099ccea87affe725bfebc90a628e248921f11.scope: Deactivated successfully.
Feb 12 20:27:18.997224 env[1061]: time="2024-02-12T20:27:18.997086472Z" level=info msg="StartContainer for \"b696e4db24b860be582ac19294e099ccea87affe725bfebc90a628e248921f11\" returns successfully"
Feb 12 20:27:19.041860 env[1061]: time="2024-02-12T20:27:19.041796010Z" level=info msg="shim disconnected" id=b696e4db24b860be582ac19294e099ccea87affe725bfebc90a628e248921f11
Feb 12 20:27:19.042276 env[1061]: time="2024-02-12T20:27:19.042253589Z" level=warning msg="cleaning up after shim disconnected" id=b696e4db24b860be582ac19294e099ccea87affe725bfebc90a628e248921f11 namespace=k8s.io
Feb 12 20:27:19.042375 env[1061]: time="2024-02-12T20:27:19.042357844Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:19.052717 env[1061]: time="2024-02-12T20:27:19.052660696Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3943 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:19.276399 kubelet[1913]: E0212 20:27:19.275688 1913 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-hmkml" podUID="63cb73e4-ef80-455f-bdc8-6463b258ef2a"
Feb 12 20:27:19.279801 kubelet[1913]: I0212 20:27:19.279752 1913 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c0aa08a0-3ea6-4250-8142-7ffc9a314103" path="/var/lib/kubelet/pods/c0aa08a0-3ea6-4250-8142-7ffc9a314103/volumes"
Feb 12 20:27:19.428942 kubelet[1913]: E0212 20:27:19.428889 1913 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:27:19.857499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b696e4db24b860be582ac19294e099ccea87affe725bfebc90a628e248921f11-rootfs.mount: Deactivated successfully.
Feb 12 20:27:19.889286 env[1061]: time="2024-02-12T20:27:19.888993005Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:27:19.940983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount54460003.mount: Deactivated successfully.
Feb 12 20:27:19.953836 env[1061]: time="2024-02-12T20:27:19.953699239Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b89c987d7bf211b8848ab030b345a1f3ad9628b291ad33d74a5cf76c37dac263\""
Feb 12 20:27:19.957226 env[1061]: time="2024-02-12T20:27:19.957081721Z" level=info msg="StartContainer for \"b89c987d7bf211b8848ab030b345a1f3ad9628b291ad33d74a5cf76c37dac263\""
Feb 12 20:27:19.982418 systemd[1]: Started cri-containerd-b89c987d7bf211b8848ab030b345a1f3ad9628b291ad33d74a5cf76c37dac263.scope.
Feb 12 20:27:20.051385 env[1061]: time="2024-02-12T20:27:20.051333010Z" level=info msg="StartContainer for \"b89c987d7bf211b8848ab030b345a1f3ad9628b291ad33d74a5cf76c37dac263\" returns successfully"
Feb 12 20:27:20.053858 systemd[1]: cri-containerd-b89c987d7bf211b8848ab030b345a1f3ad9628b291ad33d74a5cf76c37dac263.scope: Deactivated successfully.
Feb 12 20:27:20.083950 env[1061]: time="2024-02-12T20:27:20.083875649Z" level=info msg="shim disconnected" id=b89c987d7bf211b8848ab030b345a1f3ad9628b291ad33d74a5cf76c37dac263
Feb 12 20:27:20.083950 env[1061]: time="2024-02-12T20:27:20.083950058Z" level=warning msg="cleaning up after shim disconnected" id=b89c987d7bf211b8848ab030b345a1f3ad9628b291ad33d74a5cf76c37dac263 namespace=k8s.io
Feb 12 20:27:20.083950 env[1061]: time="2024-02-12T20:27:20.083962290Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:20.093573 env[1061]: time="2024-02-12T20:27:20.093516695Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3999 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:20.857349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b89c987d7bf211b8848ab030b345a1f3ad9628b291ad33d74a5cf76c37dac263-rootfs.mount: Deactivated successfully.
Feb 12 20:27:20.900258 env[1061]: time="2024-02-12T20:27:20.897394272Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:27:20.933396 env[1061]: time="2024-02-12T20:27:20.933350511Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a\""
Feb 12 20:27:20.934131 env[1061]: time="2024-02-12T20:27:20.934110402Z" level=info msg="StartContainer for \"048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a\""
Feb 12 20:27:20.970475 systemd[1]: Started cri-containerd-048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a.scope.
Feb 12 20:27:21.014786 systemd[1]: cri-containerd-048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a.scope: Deactivated successfully.
Feb 12 20:27:21.020527 env[1061]: time="2024-02-12T20:27:21.020406858Z" level=info msg="StartContainer for \"048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a\" returns successfully"
Feb 12 20:27:21.049383 env[1061]: time="2024-02-12T20:27:21.049319540Z" level=info msg="shim disconnected" id=048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a
Feb 12 20:27:21.049775 env[1061]: time="2024-02-12T20:27:21.049754048Z" level=warning msg="cleaning up after shim disconnected" id=048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a namespace=k8s.io
Feb 12 20:27:21.049891 env[1061]: time="2024-02-12T20:27:21.049873429Z" level=info msg="cleaning up dead shim"
Feb 12 20:27:21.059094 env[1061]: time="2024-02-12T20:27:21.059054051Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:27:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4055 runtime=io.containerd.runc.v2\n"
Feb 12 20:27:21.276994 kubelet[1913]: E0212 20:27:21.276766 1913 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-hmkml" podUID="63cb73e4-ef80-455f-bdc8-6463b258ef2a"
Feb 12 20:27:21.857669 systemd[1]: run-containerd-runc-k8s.io-048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a-runc.YfmpHX.mount: Deactivated successfully.
Feb 12 20:27:21.858356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-048ec8f253a2657d3bd912690c098b8b5e73d93413f559b8fc98e9abd7d5da5a-rootfs.mount: Deactivated successfully.
Feb 12 20:27:21.905816 env[1061]: time="2024-02-12T20:27:21.905693320Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:27:21.943736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170614428.mount: Deactivated successfully.
Feb 12 20:27:21.961835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747825857.mount: Deactivated successfully.
Feb 12 20:27:21.979146 env[1061]: time="2024-02-12T20:27:21.979059971Z" level=info msg="CreateContainer within sandbox \"abb9930d6bb5f9df80b7069aff8efb3286965d5184b97ba87ab18858b56f7cd0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c\""
Feb 12 20:27:21.983405 env[1061]: time="2024-02-12T20:27:21.980865704Z" level=info msg="StartContainer for \"0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c\""
Feb 12 20:27:22.003017 systemd[1]: Started cri-containerd-0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c.scope.
Feb 12 20:27:22.057407 env[1061]: time="2024-02-12T20:27:22.057319447Z" level=info msg="StartContainer for \"0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c\" returns successfully"
Feb 12 20:27:22.925633 kubelet[1913]: I0212 20:27:22.925589 1913 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-b2khb" podStartSLOduration=5.925549401 podCreationTimestamp="2024-02-12 20:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:27:22.924230491 +0000 UTC m=+143.820114773" watchObservedRunningTime="2024-02-12 20:27:22.925549401 +0000 UTC m=+143.821433663"
Feb 12 20:27:22.935429 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:27:22.978286 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Feb 12 20:27:23.265627 systemd[1]: run-containerd-runc-k8s.io-0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c-runc.XJiPwj.mount: Deactivated successfully.
Feb 12 20:27:23.275694 kubelet[1913]: E0212 20:27:23.275419 1913 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-hmkml" podUID="63cb73e4-ef80-455f-bdc8-6463b258ef2a"
Feb 12 20:27:25.491038 systemd[1]: run-containerd-runc-k8s.io-0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c-runc.ZifVDK.mount: Deactivated successfully.
Feb 12 20:27:26.042715 systemd-networkd[978]: lxc_health: Link UP
Feb 12 20:27:26.050845 systemd-networkd[978]: lxc_health: Gained carrier
Feb 12 20:27:26.053317 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:27:27.311700 systemd-networkd[978]: lxc_health: Gained IPv6LL
Feb 12 20:27:27.708351 systemd[1]: run-containerd-runc-k8s.io-0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c-runc.d2b6Tc.mount: Deactivated successfully.
Feb 12 20:27:29.906796 systemd[1]: run-containerd-runc-k8s.io-0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c-runc.OUk2NO.mount: Deactivated successfully.
Feb 12 20:27:32.110523 systemd[1]: run-containerd-runc-k8s.io-0674e29310ec2c60f9ab51388e14fe42b73f7c50c51a056b589d95f228e4b62c-runc.d6elzN.mount: Deactivated successfully.
Feb 12 20:27:32.512404 sshd[3741]: pam_unix(sshd:session): session closed for user core
Feb 12 20:27:32.526013 systemd[1]: sshd@22-172.24.4.211:22-172.24.4.1:43840.service: Deactivated successfully.
Feb 12 20:27:32.527802 systemd[1]: session-23.scope: Deactivated successfully.
Feb 12 20:27:32.528412 systemd-logind[1050]: Session 23 logged out. Waiting for processes to exit.
Feb 12 20:27:32.532735 systemd-logind[1050]: Removed session 23.