Feb 12 19:40:53.081771 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 12 19:40:53.081804 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 12 19:40:53.081823 kernel: BIOS-provided physical RAM map:
Feb 12 19:40:53.081833 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 19:40:53.081844 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 19:40:53.081855 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 19:40:53.081867 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Feb 12 19:40:53.081877 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Feb 12 19:40:53.081890 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 19:40:53.081901 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 19:40:53.081911 kernel: NX (Execute Disable) protection: active
Feb 12 19:40:53.081922 kernel: SMBIOS 2.8 present.
Feb 12 19:40:53.081933 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 12 19:40:53.081945 kernel: Hypervisor detected: KVM
Feb 12 19:40:53.081959 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 19:40:53.081974 kernel: kvm-clock: cpu 0, msr 42faa001, primary cpu clock
Feb 12 19:40:53.089110 kernel: kvm-clock: using sched offset of 4021355338 cycles
Feb 12 19:40:53.089124 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 19:40:53.089135 kernel: tsc: Detected 2494.138 MHz processor
Feb 12 19:40:53.089146 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:40:53.089158 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:40:53.089169 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Feb 12 19:40:53.089187 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:40:53.089208 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:40:53.089219 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Feb 12 19:40:53.089230 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:40:53.089256 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:40:53.089266 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:40:53.089276 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 12 19:40:53.089327 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:40:53.089335 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:40:53.089342 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:40:53.089354 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 19:40:53.089362 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 12 19:40:53.089369 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 12 19:40:53.089376 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 12 19:40:53.089383 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 12 19:40:53.089391 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 12 19:40:53.089400 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 12 19:40:53.089412 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 12 19:40:53.089427 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 19:40:53.089435 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 19:40:53.089443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 12 19:40:53.089451 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 12 19:40:53.089459 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Feb 12 19:40:53.089466 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Feb 12 19:40:53.089477 kernel: Zone ranges:
Feb 12 19:40:53.089489 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:40:53.089497 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Feb 12 19:40:53.089550 kernel: Normal empty
Feb 12 19:40:53.089558 kernel: Movable zone start for each node
Feb 12 19:40:53.089565 kernel: Early memory node ranges
Feb 12 19:40:53.089573 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 19:40:53.089581 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Feb 12 19:40:53.089588 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Feb 12 19:40:53.089599 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:40:53.089607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 19:40:53.089615 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Feb 12 19:40:53.089623 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 19:40:53.089631 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 19:40:53.089639 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:40:53.089651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 19:40:53.089662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 19:40:53.089670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:40:53.089688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 19:40:53.089701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 19:40:53.089712 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:40:53.089720 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 19:40:53.089727 kernel: TSC deadline timer available
Feb 12 19:40:53.089735 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 19:40:53.089743 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 12 19:40:53.089754 kernel: Booting paravirtualized kernel on KVM
Feb 12 19:40:53.089763 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:40:53.089777 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 19:40:53.089785 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 19:40:53.089792 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 19:40:53.089800 kernel: pcpu-alloc: [0] 0 1
Feb 12 19:40:53.089811 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 12 19:40:53.089819 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 12 19:40:53.089826 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Feb 12 19:40:53.089838 kernel: Policy zone: DMA32
Feb 12 19:40:53.089850 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 12 19:40:53.089861 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:40:53.089869 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 19:40:53.089877 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 19:40:53.089884 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:40:53.089893 kernel: Memory: 1975320K/2096600K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 12 19:40:53.089901 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:40:53.089908 kernel: Kernel/User page tables isolation: enabled
Feb 12 19:40:53.089916 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:40:53.089926 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:40:53.089934 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:40:53.089942 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:40:53.089950 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:40:53.089958 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:40:53.089966 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:40:53.089974 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:40:53.090002 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:40:53.090010 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 19:40:53.090021 kernel: random: crng init done
Feb 12 19:40:53.090028 kernel: Console: colour VGA+ 80x25
Feb 12 19:40:53.090040 kernel: printk: console [tty0] enabled
Feb 12 19:40:53.090048 kernel: printk: console [ttyS0] enabled
Feb 12 19:40:53.090059 kernel: ACPI: Core revision 20210730
Feb 12 19:40:53.090070 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 19:40:53.090081 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:40:53.090089 kernel: x2apic enabled
Feb 12 19:40:53.090097 kernel: Switched APIC routing to physical x2apic.
Feb 12 19:40:53.090107 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 19:40:53.090115 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 12 19:40:53.090123 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Feb 12 19:40:53.090131 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 12 19:40:53.090139 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 12 19:40:53.090147 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:40:53.090154 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:40:53.090162 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:40:53.090174 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:40:53.090184 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 12 19:40:53.090201 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 19:40:53.090209 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 19:40:53.090232 kernel: MDS: Mitigation: Clear CPU buffers
Feb 12 19:40:53.090245 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:40:53.090260 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:40:53.090271 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:40:53.090279 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:40:53.090288 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:40:53.090315 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 19:40:53.090327 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:40:53.090335 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:40:53.090344 kernel: LSM: Security Framework initializing
Feb 12 19:40:53.090401 kernel: SELinux: Initializing.
Feb 12 19:40:53.090413 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 19:40:53.090450 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 19:40:53.090466 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x3f, stepping: 0x2)
Feb 12 19:40:53.090475 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 12 19:40:53.090483 kernel: signal: max sigframe size: 1776
Feb 12 19:40:53.090491 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:40:53.090499 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 19:40:53.090508 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:40:53.090516 kernel: x86: Booting SMP configuration:
Feb 12 19:40:53.090524 kernel: .... node #0, CPUs: #1
Feb 12 19:40:53.090532 kernel: kvm-clock: cpu 1, msr 42faa041, secondary cpu clock
Feb 12 19:40:53.090541 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 12 19:40:53.090552 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:40:53.090560 kernel: smpboot: Max logical packages: 1
Feb 12 19:40:53.090568 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Feb 12 19:40:53.090576 kernel: devtmpfs: initialized
Feb 12 19:40:53.090584 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:40:53.090646 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:40:53.090655 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:40:53.090663 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:40:53.090671 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:40:53.090682 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:40:53.090691 kernel: audit: type=2000 audit(1707766851.635:1): state=initialized audit_enabled=0 res=1
Feb 12 19:40:53.090699 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:40:53.090707 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:40:53.090716 kernel: cpuidle: using governor menu
Feb 12 19:40:53.090724 kernel: ACPI: bus type PCI registered
Feb 12 19:40:53.090732 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:40:53.090740 kernel: dca service started, version 1.12.1
Feb 12 19:40:53.090748 kernel: PCI: Using configuration type 1 for base access
Feb 12 19:40:53.090760 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:40:53.090768 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:40:53.090776 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:40:53.090784 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:40:53.090793 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:40:53.090806 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:40:53.090819 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:40:53.090830 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:40:53.090839 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:40:53.090854 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:40:53.090867 kernel: ACPI: Interpreter enabled
Feb 12 19:40:53.090878 kernel: ACPI: PM: (supports S0 S5)
Feb 12 19:40:53.090887 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:40:53.090895 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:40:53.090903 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 19:40:53.090912 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 19:40:53.091195 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 19:40:53.091402 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 19:40:53.091419 kernel: acpiphp: Slot [3] registered
Feb 12 19:40:53.091428 kernel: acpiphp: Slot [4] registered
Feb 12 19:40:53.091439 kernel: acpiphp: Slot [5] registered
Feb 12 19:40:53.091450 kernel: acpiphp: Slot [6] registered
Feb 12 19:40:53.091461 kernel: acpiphp: Slot [7] registered
Feb 12 19:40:53.091473 kernel: acpiphp: Slot [8] registered
Feb 12 19:40:53.091533 kernel: acpiphp: Slot [9] registered
Feb 12 19:40:53.091547 kernel: acpiphp: Slot [10] registered
Feb 12 19:40:53.091555 kernel: acpiphp: Slot [11] registered
Feb 12 19:40:53.091563 kernel: acpiphp: Slot [12] registered
Feb 12 19:40:53.091574 kernel: acpiphp: Slot [13] registered
Feb 12 19:40:53.091587 kernel: acpiphp: Slot [14] registered
Feb 12 19:40:53.091598 kernel: acpiphp: Slot [15] registered
Feb 12 19:40:53.091606 kernel: acpiphp: Slot [16] registered
Feb 12 19:40:53.091614 kernel: acpiphp: Slot [17] registered
Feb 12 19:40:53.091623 kernel: acpiphp: Slot [18] registered
Feb 12 19:40:53.091631 kernel: acpiphp: Slot [19] registered
Feb 12 19:40:53.091643 kernel: acpiphp: Slot [20] registered
Feb 12 19:40:53.091652 kernel: acpiphp: Slot [21] registered
Feb 12 19:40:53.091660 kernel: acpiphp: Slot [22] registered
Feb 12 19:40:53.091668 kernel: acpiphp: Slot [23] registered
Feb 12 19:40:53.091676 kernel: acpiphp: Slot [24] registered
Feb 12 19:40:53.091684 kernel: acpiphp: Slot [25] registered
Feb 12 19:40:53.091692 kernel: acpiphp: Slot [26] registered
Feb 12 19:40:53.091702 kernel: acpiphp: Slot [27] registered
Feb 12 19:40:53.091714 kernel: acpiphp: Slot [28] registered
Feb 12 19:40:53.091729 kernel: acpiphp: Slot [29] registered
Feb 12 19:40:53.091742 kernel: acpiphp: Slot [30] registered
Feb 12 19:40:53.091754 kernel: acpiphp: Slot [31] registered
Feb 12 19:40:53.091766 kernel: PCI host bridge to bus 0000:00
Feb 12 19:40:53.092002 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 19:40:53.092231 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 19:40:53.092423 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 19:40:53.092520 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 19:40:53.092645 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 19:40:53.092737 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 19:40:53.092888 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 19:40:53.093051 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 19:40:53.093163 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 19:40:53.093271 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 12 19:40:53.093398 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 19:40:53.093524 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 19:40:53.093652 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 19:40:53.093791 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 19:40:53.093943 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 12 19:40:53.095379 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 12 19:40:53.095568 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 19:40:53.095744 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 19:40:53.095865 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 19:40:53.096059 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 12 19:40:53.096211 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 12 19:40:53.096350 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 12 19:40:53.096511 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 12 19:40:53.096614 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 12 19:40:53.096714 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 19:40:53.096862 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 12 19:40:53.097002 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 12 19:40:53.097124 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 12 19:40:53.097228 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 12 19:40:53.097366 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 12 19:40:53.097475 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 12 19:40:53.097620 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 12 19:40:53.097748 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 12 19:40:53.097880 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 12 19:40:53.098045 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 12 19:40:53.098162 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 12 19:40:53.098256 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 12 19:40:53.098428 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 12 19:40:53.098582 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 19:40:53.098684 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 12 19:40:53.098795 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 12 19:40:53.103134 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 12 19:40:53.103456 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 12 19:40:53.103667 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 12 19:40:53.103830 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 12 19:40:53.108304 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 12 19:40:53.108489 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 12 19:40:53.108635 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 12 19:40:53.108659 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 19:40:53.108673 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 19:40:53.108685 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 19:40:53.108708 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 19:40:53.108720 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 19:40:53.108734 kernel: iommu: Default domain type: Translated
Feb 12 19:40:53.108747 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:40:53.108930 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 19:40:53.109086 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 19:40:53.109189 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 19:40:53.109203 kernel: vgaarb: loaded
Feb 12 19:40:53.109272 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:40:53.109290 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:40:53.109302 kernel: PTP clock support registered
Feb 12 19:40:53.109312 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:40:53.109323 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 19:40:53.109334 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 19:40:53.109345 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Feb 12 19:40:53.109357 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 19:40:53.109368 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 19:40:53.109383 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 19:40:53.109396 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:40:53.109407 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:40:53.109418 kernel: pnp: PnP ACPI init
Feb 12 19:40:53.109430 kernel: pnp: PnP ACPI: found 4 devices
Feb 12 19:40:53.109440 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:40:53.109451 kernel: NET: Registered PF_INET protocol family
Feb 12 19:40:53.109464 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 19:40:53.109476 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 19:40:53.109491 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:40:53.109502 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:40:53.109540 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 19:40:53.109560 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 19:40:53.109572 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 19:40:53.109584 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 19:40:53.109598 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:40:53.109610 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:40:53.109739 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 19:40:53.109832 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 19:40:53.109910 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 19:40:53.110031 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 19:40:53.110160 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 19:40:53.110316 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 19:40:53.110446 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 19:40:53.110535 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 19:40:53.110551 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 19:40:53.110661 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 33770 usecs
Feb 12 19:40:53.110680 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:40:53.110689 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 19:40:53.110701 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 12 19:40:53.110717 kernel: Initialise system trusted keyrings
Feb 12 19:40:53.110742 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 19:40:53.110758 kernel: Key type asymmetric registered
Feb 12 19:40:53.110769 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:40:53.110786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:40:53.110801 kernel: io scheduler mq-deadline registered
Feb 12 19:40:53.110818 kernel: io scheduler kyber registered
Feb 12 19:40:53.110846 kernel: io scheduler bfq registered
Feb 12 19:40:53.110859 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:40:53.110885 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 12 19:40:53.110897 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 19:40:53.110909 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 19:40:53.110922 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:40:53.110936 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:40:53.110954 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 19:40:53.110966 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 19:40:53.110991 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 19:40:53.111231 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 12 19:40:53.111250 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 19:40:53.111397 kernel: rtc_cmos 00:03: registered as rtc0
Feb 12 19:40:53.111630 kernel: rtc_cmos 00:03: setting system clock to 2024-02-12T19:40:52 UTC (1707766852)
Feb 12 19:40:53.111772 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 12 19:40:53.111791 kernel: intel_pstate: CPU model not supported
Feb 12 19:40:53.111844 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:40:53.111853 kernel: Segment Routing with IPv6
Feb 12 19:40:53.111880 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:40:53.111888 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:40:53.111897 kernel: Key type dns_resolver registered
Feb 12 19:40:53.111906 kernel: IPI shorthand broadcast: enabled
Feb 12 19:40:53.111914 kernel: sched_clock: Marking stable (794028862, 127969158)->(1064803923, -142805903)
Feb 12 19:40:53.111927 kernel: registered taskstats version 1
Feb 12 19:40:53.111936 kernel: Loading compiled-in X.509 certificates
Feb 12 19:40:53.111959 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 12 19:40:53.111968 kernel: Key type .fscrypt registered
Feb 12 19:40:53.115026 kernel: Key type fscrypt-provisioning registered
Feb 12 19:40:53.115129 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:40:53.115147 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:40:53.115160 kernel: ima: No architecture policies found
Feb 12 19:40:53.115172 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 19:40:53.115220 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 19:40:53.115237 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 19:40:53.115251 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 19:40:53.115262 kernel: Run /init as init process
Feb 12 19:40:53.115270 kernel: with arguments:
Feb 12 19:40:53.115283 kernel: /init
Feb 12 19:40:53.115312 kernel: with environment:
Feb 12 19:40:53.115323 kernel: HOME=/
Feb 12 19:40:53.115331 kernel: TERM=linux
Feb 12 19:40:53.115354 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:40:53.115373 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:40:53.115389 systemd[1]: Detected virtualization kvm.
Feb 12 19:40:53.115403 systemd[1]: Detected architecture x86-64.
Feb 12 19:40:53.115412 systemd[1]: Running in initrd.
Feb 12 19:40:53.115420 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:40:53.115429 systemd[1]: Hostname set to .
Feb 12 19:40:53.115441 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 19:40:53.115450 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:40:53.115459 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:40:53.115468 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:40:53.115476 systemd[1]: Reached target paths.target.
Feb 12 19:40:53.115485 systemd[1]: Reached target slices.target.
Feb 12 19:40:53.115494 systemd[1]: Reached target swap.target.
Feb 12 19:40:53.115502 systemd[1]: Reached target timers.target.
Feb 12 19:40:53.115514 systemd[1]: Listening on iscsid.socket.
Feb 12 19:40:53.115523 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:40:53.115532 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:40:53.115541 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:40:53.115550 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:40:53.115559 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:40:53.115568 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:40:53.115577 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:40:53.115592 systemd[1]: Reached target sockets.target.
Feb 12 19:40:53.115605 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:40:53.115615 systemd[1]: Finished network-cleanup.service.
Feb 12 19:40:53.115629 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:40:53.115642 systemd[1]: Starting systemd-journald.service...
Feb 12 19:40:53.115656 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:40:53.115672 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:40:53.115684 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:40:53.115696 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:40:53.115711 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:40:53.115733 systemd-journald[183]: Journal started
Feb 12 19:40:53.115895 systemd-journald[183]: Runtime Journal (/run/log/journal/f5cb553ea7154e9980b834bce5a6252c) is 4.9M, max 39.5M, 34.5M free.
Feb 12 19:40:53.099547 systemd-modules-load[184]: Inserted module 'overlay'
Feb 12 19:40:53.149643 systemd[1]: Started systemd-journald.service.
Feb 12 19:40:53.149693 kernel: audit: type=1130 audit(1707766853.141:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.122144 systemd-resolved[185]: Positive Trust Anchors: Feb 12 19:40:53.155591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 19:40:53.155619 kernel: audit: type=1130 audit(1707766853.143:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.122156 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:40:53.122188 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:40:53.160196 kernel: Bridge firewalling registered Feb 12 19:40:53.125991 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 12 19:40:53.144382 systemd[1]: Started systemd-resolved.service. 
Feb 12 19:40:53.144858 systemd[1]: Reached target nss-lookup.target. Feb 12 19:40:53.157663 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 12 19:40:53.159261 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:40:53.165192 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:40:53.175540 kernel: audit: type=1130 audit(1707766853.143:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.175600 kernel: audit: type=1130 audit(1707766853.166:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.173996 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:40:53.174839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:40:53.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.183021 kernel: audit: type=1130 audit(1707766853.174:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:40:53.193011 kernel: SCSI subsystem initialized Feb 12 19:40:53.194092 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:40:53.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.199188 systemd[1]: Starting dracut-cmdline.service... Feb 12 19:40:53.200759 kernel: audit: type=1130 audit(1707766853.194:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.212858 dracut-cmdline[202]: dracut-dracut-053 Feb 12 19:40:53.217016 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:40:53.217049 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:40:53.217061 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:40:53.219008 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 12 19:40:53.230646 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 12 19:40:53.231852 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:40:53.239697 kernel: audit: type=1130 audit(1707766853.233:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:40:53.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.240089 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:40:53.252792 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:40:53.257955 kernel: audit: type=1130 audit(1707766853.252:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.348063 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:40:53.369049 kernel: iscsi: registered transport (tcp) Feb 12 19:40:53.399820 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:40:53.399899 kernel: QLogic iSCSI HBA Driver Feb 12 19:40:53.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.471636 systemd[1]: Finished dracut-cmdline.service. Feb 12 19:40:53.473551 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:40:53.478280 kernel: audit: type=1130 audit(1707766853.472:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:40:53.542090 kernel: raid6: avx2x4 gen() 13914 MB/s Feb 12 19:40:53.559056 kernel: raid6: avx2x4 xor() 5881 MB/s Feb 12 19:40:53.576051 kernel: raid6: avx2x2 gen() 13123 MB/s Feb 12 19:40:53.593051 kernel: raid6: avx2x2 xor() 15204 MB/s Feb 12 19:40:53.610045 kernel: raid6: avx2x1 gen() 10750 MB/s Feb 12 19:40:53.627067 kernel: raid6: avx2x1 xor() 13648 MB/s Feb 12 19:40:53.644079 kernel: raid6: sse2x4 gen() 9745 MB/s Feb 12 19:40:53.661056 kernel: raid6: sse2x4 xor() 5601 MB/s Feb 12 19:40:53.679194 kernel: raid6: sse2x2 gen() 8784 MB/s Feb 12 19:40:53.696067 kernel: raid6: sse2x2 xor() 6383 MB/s Feb 12 19:40:53.713188 kernel: raid6: sse2x1 gen() 8676 MB/s Feb 12 19:40:53.730792 kernel: raid6: sse2x1 xor() 5162 MB/s Feb 12 19:40:53.730888 kernel: raid6: using algorithm avx2x4 gen() 13914 MB/s Feb 12 19:40:53.730917 kernel: raid6: .... xor() 5881 MB/s, rmw enabled Feb 12 19:40:53.731689 kernel: raid6: using avx2x2 recovery algorithm Feb 12 19:40:53.749030 kernel: xor: automatically using best checksumming function avx Feb 12 19:40:53.893204 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:40:53.910354 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:40:53.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:53.910000 audit: BPF prog-id=7 op=LOAD Feb 12 19:40:53.910000 audit: BPF prog-id=8 op=LOAD Feb 12 19:40:53.913050 systemd[1]: Starting systemd-udevd.service... Feb 12 19:40:53.932565 systemd-udevd[385]: Using default interface naming scheme 'v252'. Feb 12 19:40:53.939291 systemd[1]: Started systemd-udevd.service. Feb 12 19:40:53.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:40:53.944815 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:40:53.964011 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Feb 12 19:40:54.019580 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:40:54.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.022078 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:40:54.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:54.105942 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:40:54.173289 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 12 19:40:54.189108 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 19:40:54.189180 kernel: GPT:9289727 != 125829119 Feb 12 19:40:54.189193 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:40:54.189204 kernel: GPT:9289727 != 125829119 Feb 12 19:40:54.189215 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:40:54.189225 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:40:54.197044 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:40:54.210045 kernel: scsi host0: Virtio SCSI HBA Feb 12 19:40:54.219047 kernel: virtio_blk virtio5: [vdb] 948 512-byte logical blocks (485 kB/474 KiB) Feb 12 19:40:54.256025 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:40:54.256151 kernel: AES CTR mode by8 optimization enabled Feb 12 19:40:54.258022 kernel: libata version 3.00 loaded. 
Feb 12 19:40:54.289162 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 19:40:54.317052 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (435) Feb 12 19:40:54.317770 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:40:54.320357 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:40:54.332534 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:40:54.403294 kernel: scsi host1: ata_piix Feb 12 19:40:54.403599 kernel: scsi host2: ata_piix Feb 12 19:40:54.403798 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 12 19:40:54.403818 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 12 19:40:54.403835 kernel: ACPI: bus type USB registered Feb 12 19:40:54.403851 kernel: usbcore: registered new interface driver usbfs Feb 12 19:40:54.403866 kernel: usbcore: registered new interface driver hub Feb 12 19:40:54.403941 kernel: usbcore: registered new device driver usb Feb 12 19:40:54.408390 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:40:54.418284 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:40:54.422342 systemd[1]: Starting disk-uuid.service... Feb 12 19:40:54.432347 disk-uuid[505]: Primary Header is updated. Feb 12 19:40:54.432347 disk-uuid[505]: Secondary Entries is updated. Feb 12 19:40:54.432347 disk-uuid[505]: Secondary Header is updated. 
Feb 12 19:40:54.443065 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:40:54.456004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:40:54.541010 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Feb 12 19:40:54.550015 kernel: ehci-pci: EHCI PCI platform driver Feb 12 19:40:54.557013 kernel: uhci_hcd: USB Universal Host Controller Interface driver Feb 12 19:40:54.590624 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 12 19:40:54.597037 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 12 19:40:54.604311 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 12 19:40:54.604699 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Feb 12 19:40:54.616653 kernel: hub 1-0:1.0: USB hub found Feb 12 19:40:54.617001 kernel: hub 1-0:1.0: 2 ports detected Feb 12 19:40:55.455628 disk-uuid[506]: The operation has completed successfully. Feb 12 19:40:55.456489 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:40:55.502422 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 19:40:55.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:55.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:55.502579 systemd[1]: Finished disk-uuid.service. Feb 12 19:40:55.518757 systemd[1]: Starting verity-setup.service... Feb 12 19:40:55.547022 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 19:40:55.613409 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:40:55.615822 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:40:55.617320 systemd[1]: Finished verity-setup.service. 
Feb 12 19:40:55.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:55.744015 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:40:55.745554 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:40:55.746148 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:40:55.747141 systemd[1]: Starting ignition-setup.service... Feb 12 19:40:55.748633 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:40:55.769392 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:40:55.769478 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:40:55.769520 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:40:55.790889 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:40:55.801720 systemd[1]: Finished ignition-setup.service. Feb 12 19:40:55.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:55.804347 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:40:55.961740 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:40:55.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:55.962000 audit: BPF prog-id=9 op=LOAD Feb 12 19:40:55.964183 systemd[1]: Starting systemd-networkd.service... 
Feb 12 19:40:55.969886 ignition[600]: Ignition 2.14.0 Feb 12 19:40:55.971344 ignition[600]: Stage: fetch-offline Feb 12 19:40:55.972308 ignition[600]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:55.973296 ignition[600]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:55.981148 ignition[600]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:55.982712 ignition[600]: parsed url from cmdline: "" Feb 12 19:40:55.982913 ignition[600]: no config URL provided Feb 12 19:40:55.983815 ignition[600]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:40:55.984875 ignition[600]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:40:55.985508 ignition[600]: failed to fetch config: resource requires networking Feb 12 19:40:55.986705 ignition[600]: Ignition finished successfully Feb 12 19:40:55.988583 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:40:55.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.000873 systemd-networkd[688]: lo: Link UP Feb 12 19:40:56.000887 systemd-networkd[688]: lo: Gained carrier Feb 12 19:40:56.002357 systemd-networkd[688]: Enumeration completed Feb 12 19:40:56.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.002482 systemd[1]: Started systemd-networkd.service. Feb 12 19:40:56.003102 systemd[1]: Reached target network.target. Feb 12 19:40:56.005601 systemd[1]: Starting ignition-fetch.service... Feb 12 19:40:56.006207 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 19:40:56.007576 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Feb 12 19:40:56.009289 systemd-networkd[688]: eth1: Link UP Feb 12 19:40:56.009295 systemd-networkd[688]: eth1: Gained carrier Feb 12 19:40:56.014434 systemd[1]: Starting iscsiuio.service... Feb 12 19:40:56.021682 ignition[690]: Ignition 2.14.0 Feb 12 19:40:56.017889 systemd-networkd[688]: eth0: Link UP Feb 12 19:40:56.021692 ignition[690]: Stage: fetch Feb 12 19:40:56.017897 systemd-networkd[688]: eth0: Gained carrier Feb 12 19:40:56.021891 ignition[690]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:56.021911 ignition[690]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:56.024183 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:56.024344 ignition[690]: parsed url from cmdline: "" Feb 12 19:40:56.024349 ignition[690]: no config URL provided Feb 12 19:40:56.024356 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:40:56.024365 ignition[690]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:40:56.024399 ignition[690]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 12 19:40:56.045505 systemd[1]: Started iscsiuio.service. Feb 12 19:40:56.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.047914 systemd-networkd[688]: eth0: DHCPv4 address 146.190.38.70/19, gateway 146.190.32.1 acquired from 169.254.169.253 Feb 12 19:40:56.049714 systemd[1]: Starting iscsid.service... 
Feb 12 19:40:56.055147 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253 Feb 12 19:40:56.059786 iscsid[698]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:40:56.061305 iscsid[698]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:40:56.061305 iscsid[698]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:40:56.063810 iscsid[698]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:40:56.063810 iscsid[698]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:40:56.063810 iscsid[698]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:40:56.066752 systemd[1]: Started iscsid.service. Feb 12 19:40:56.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.069493 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:40:56.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.092015 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:40:56.092002 ignition[690]: GET result: OK Feb 12 19:40:56.092688 systemd[1]: Reached target remote-fs-pre.target. 
Feb 12 19:40:56.092114 ignition[690]: parsing config with SHA512: a42e87e38cb44e79c60c411a0f701715417f5afd727f977be82afb1670e3d40f6768896799b1abf649ac59f3e4323cdfcfcd1366ab05b57f64dea152987c4738 Feb 12 19:40:56.093144 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:40:56.093785 systemd[1]: Reached target remote-fs.target. Feb 12 19:40:56.097807 systemd[1]: Starting dracut-pre-mount.service... Feb 12 19:40:56.111708 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:40:56.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.129049 unknown[690]: fetched base config from "system" Feb 12 19:40:56.129060 unknown[690]: fetched base config from "system" Feb 12 19:40:56.129732 ignition[690]: fetch: fetch complete Feb 12 19:40:56.129066 unknown[690]: fetched user config from "digitalocean" Feb 12 19:40:56.129824 ignition[690]: fetch: fetch passed Feb 12 19:40:56.129898 ignition[690]: Ignition finished successfully Feb 12 19:40:56.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.132880 systemd[1]: Finished ignition-fetch.service. Feb 12 19:40:56.134479 systemd[1]: Starting ignition-kargs.service... 
Feb 12 19:40:56.147951 ignition[713]: Ignition 2.14.0 Feb 12 19:40:56.147966 ignition[713]: Stage: kargs Feb 12 19:40:56.148109 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:56.148128 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:56.149938 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:56.151336 ignition[713]: kargs: kargs passed Feb 12 19:40:56.151413 ignition[713]: Ignition finished successfully Feb 12 19:40:56.152726 systemd[1]: Finished ignition-kargs.service. Feb 12 19:40:56.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.155047 systemd[1]: Starting ignition-disks.service... Feb 12 19:40:56.166217 ignition[719]: Ignition 2.14.0 Feb 12 19:40:56.166238 ignition[719]: Stage: disks Feb 12 19:40:56.166395 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:56.166420 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:56.168855 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:56.171434 ignition[719]: disks: disks passed Feb 12 19:40:56.171511 ignition[719]: Ignition finished successfully Feb 12 19:40:56.172948 systemd[1]: Finished ignition-disks.service. Feb 12 19:40:56.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.173909 systemd[1]: Reached target initrd-root-device.target. 
Feb 12 19:40:56.174660 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:40:56.175456 systemd[1]: Reached target local-fs.target. Feb 12 19:40:56.176292 systemd[1]: Reached target sysinit.target. Feb 12 19:40:56.177099 systemd[1]: Reached target basic.target. Feb 12 19:40:56.179546 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:40:56.199466 systemd-fsck[727]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 19:40:56.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.203444 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:40:56.204903 systemd[1]: Mounting sysroot.mount... Feb 12 19:40:56.219630 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:40:56.218583 systemd[1]: Mounted sysroot.mount. Feb 12 19:40:56.219160 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:40:56.222156 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:40:56.224251 systemd[1]: Starting flatcar-digitalocean-network.service... Feb 12 19:40:56.226916 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:40:56.228178 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:40:56.229184 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:40:56.232748 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:40:56.236674 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 19:40:56.245677 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:40:56.266164 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:40:56.278469 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:40:56.287000 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:40:56.371589 coreos-metadata[734]: Feb 12 19:40:56.371 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:40:56.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.381775 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:40:56.383590 systemd[1]: Starting ignition-mount.service... Feb 12 19:40:56.388900 systemd[1]: Starting sysroot-boot.service... Feb 12 19:40:56.396368 coreos-metadata[734]: Feb 12 19:40:56.396 INFO Fetch successful Feb 12 19:40:56.400994 coreos-metadata[733]: Feb 12 19:40:56.400 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:40:56.405717 bash[784]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 19:40:56.414532 coreos-metadata[733]: Feb 12 19:40:56.412 INFO Fetch successful Feb 12 19:40:56.415213 coreos-metadata[734]: Feb 12 19:40:56.412 INFO wrote hostname ci-3510.3.2-e-e0f180cc85 to /sysroot/etc/hostname Feb 12 19:40:56.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.417884 systemd[1]: Finished flatcar-metadata-hostname.service. 
Feb 12 19:40:56.421090 ignition[786]: INFO : Ignition 2.14.0 Feb 12 19:40:56.421090 ignition[786]: INFO : Stage: mount Feb 12 19:40:56.422547 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:56.422547 ignition[786]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:56.425187 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:56.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.427137 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 12 19:40:56.427262 systemd[1]: Finished flatcar-digitalocean-network.service. Feb 12 19:40:56.434104 ignition[786]: INFO : mount: mount passed Feb 12 19:40:56.434696 ignition[786]: INFO : Ignition finished successfully Feb 12 19:40:56.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.435304 systemd[1]: Finished ignition-mount.service. Feb 12 19:40:56.452254 systemd[1]: Finished sysroot-boot.service. Feb 12 19:40:56.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:56.642416 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 12 19:40:56.651039 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (795) Feb 12 19:40:56.653753 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:40:56.653825 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:40:56.653855 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:40:56.663696 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:40:56.674844 systemd[1]: Starting ignition-files.service... Feb 12 19:40:56.698405 ignition[815]: INFO : Ignition 2.14.0 Feb 12 19:40:56.698405 ignition[815]: INFO : Stage: files Feb 12 19:40:56.699636 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:56.699636 ignition[815]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:56.701079 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:56.707242 ignition[815]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:40:56.708086 ignition[815]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:40:56.708086 ignition[815]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:40:56.714349 ignition[815]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:40:56.715552 ignition[815]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:40:56.718081 unknown[815]: wrote ssh authorized keys file for user: core Feb 12 19:40:56.718947 ignition[815]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:40:56.719967 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 
19:40:56.719967 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 12 19:40:57.225474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 19:40:57.334544 systemd-networkd[688]: eth1: Gained IPv6LL Feb 12 19:40:57.494815 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 12 19:40:57.496167 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 19:40:57.497224 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 19:40:57.498278 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 12 19:40:57.718342 systemd-networkd[688]: eth0: Gained IPv6LL Feb 12 19:40:57.899573 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:40:58.030267 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 12 19:40:58.031565 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 19:40:58.031565 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:40:58.031565 ignition[815]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 12 19:40:58.096394 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:40:58.403562 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 12 19:40:58.403562 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:40:58.406056 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:40:58.406056 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 12 19:40:58.460116 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:40:59.053234 ignition[815]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 12 19:40:59.054687 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:40:59.055460 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:40:59.056353 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:40:59.057054 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:40:59.057989 ignition[815]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:40:59.059141 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:40:59.060117 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:40:59.060117 ignition[815]: INFO : files: op(a): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 19:40:59.060117 ignition[815]: INFO : files: op(a): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 19:40:59.060117 ignition[815]: INFO : files: op(b): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(b): op(c): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(b): op(c): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(b): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(d): [started] processing unit "prepare-critools.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(d): op(e): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(d): op(e): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(d): [finished] processing unit "prepare-critools.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(f): [finished] 
setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(10): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(11): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:40:59.064812 ignition[815]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:40:59.080594 kernel: kauditd_printk_skb: 28 callbacks suppressed Feb 12 19:40:59.080624 kernel: audit: type=1130 audit(1707766859.072:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.071911 systemd[1]: Finished ignition-files.service. Feb 12 19:40:59.083455 ignition[815]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:40:59.083455 ignition[815]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:40:59.083455 ignition[815]: INFO : files: files passed Feb 12 19:40:59.083455 ignition[815]: INFO : Ignition finished successfully Feb 12 19:40:59.074896 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:40:59.081907 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Feb 12 19:40:59.089566 initrd-setup-root-after-ignition[840]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:40:59.083048 systemd[1]: Starting ignition-quench.service... Feb 12 19:40:59.097443 kernel: audit: type=1130 audit(1707766859.090:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.097490 kernel: audit: type=1131 audit(1707766859.090:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.089747 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:40:59.100864 kernel: audit: type=1130 audit(1707766859.096:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.089876 systemd[1]: Finished ignition-quench.service. Feb 12 19:40:59.091499 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:40:59.098136 systemd[1]: Reached target ignition-complete.target. 
Feb 12 19:40:59.102732 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:40:59.124574 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:40:59.124727 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:40:59.141152 kernel: audit: type=1130 audit(1707766859.124:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.141192 kernel: audit: type=1131 audit(1707766859.124:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.126155 systemd[1]: Reached target initrd-fs.target. Feb 12 19:40:59.141765 systemd[1]: Reached target initrd.target. Feb 12 19:40:59.142508 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:40:59.143782 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:40:59.160906 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:40:59.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.163225 systemd[1]: Starting initrd-cleanup.service... 
Feb 12 19:40:59.168077 kernel: audit: type=1130 audit(1707766859.160:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.177854 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:40:59.179382 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:40:59.180735 systemd[1]: Stopped target timers.target. Feb 12 19:40:59.181975 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:40:59.182605 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:40:59.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.186714 systemd[1]: Stopped target initrd.target. Feb 12 19:40:59.187748 kernel: audit: type=1131 audit(1707766859.182:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.187447 systemd[1]: Stopped target basic.target. Feb 12 19:40:59.188295 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:40:59.189140 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:40:59.190184 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:40:59.191110 systemd[1]: Stopped target remote-fs.target. Feb 12 19:40:59.192282 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:40:59.193085 systemd[1]: Stopped target sysinit.target. Feb 12 19:40:59.194159 systemd[1]: Stopped target local-fs.target. Feb 12 19:40:59.195011 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:40:59.195771 systemd[1]: Stopped target swap.target. Feb 12 19:40:59.196782 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:40:59.197060 systemd[1]: Stopped dracut-pre-mount.service. 
Feb 12 19:40:59.201223 kernel: audit: type=1131 audit(1707766859.197:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.198584 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:40:59.201955 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:40:59.202310 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:40:59.206492 kernel: audit: type=1131 audit(1707766859.202:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.203447 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:40:59.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.203669 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:40:59.207182 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:40:59.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.207470 systemd[1]: Stopped ignition-files.service. 
Feb 12 19:40:59.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.208691 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:40:59.208876 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:40:59.211276 systemd[1]: Stopping ignition-mount.service... Feb 12 19:40:59.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.215962 systemd[1]: Stopping iscsiuio.service... Feb 12 19:40:59.217444 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:40:59.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.217970 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:40:59.218207 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:40:59.218821 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 12 19:40:59.232411 ignition[853]: INFO : Ignition 2.14.0 Feb 12 19:40:59.232411 ignition[853]: INFO : Stage: umount Feb 12 19:40:59.232411 ignition[853]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:40:59.232411 ignition[853]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:40:59.218971 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:40:59.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.222467 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:40:59.222641 systemd[1]: Stopped iscsiuio.service. Feb 12 19:40:59.224793 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:40:59.224916 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:40:59.241997 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:40:59.244448 ignition[853]: INFO : umount: umount passed Feb 12 19:40:59.245236 ignition[853]: INFO : Ignition finished successfully Feb 12 19:40:59.247743 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:40:59.248335 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:40:59.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.248424 systemd[1]: Stopped ignition-mount.service. 
Feb 12 19:40:59.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.249022 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:40:59.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.249080 systemd[1]: Stopped ignition-disks.service. Feb 12 19:40:59.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.249903 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:40:59.249950 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:40:59.250617 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:40:59.250657 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:40:59.253602 systemd[1]: Stopped target network.target. Feb 12 19:40:59.254892 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:40:59.255054 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:40:59.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.259817 systemd[1]: Stopped target paths.target. Feb 12 19:40:59.260328 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:40:59.264110 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:40:59.264678 systemd[1]: Stopped target slices.target. Feb 12 19:40:59.265094 systemd[1]: Stopped target sockets.target. Feb 12 19:40:59.265737 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 12 19:40:59.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.265775 systemd[1]: Closed iscsid.socket. Feb 12 19:40:59.266438 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:40:59.266494 systemd[1]: Closed iscsiuio.socket. Feb 12 19:40:59.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.267146 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:40:59.267212 systemd[1]: Stopped ignition-setup.service. Feb 12 19:40:59.268182 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:40:59.269115 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:40:59.270215 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:40:59.270331 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:40:59.270851 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:40:59.270904 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:40:59.273045 systemd-networkd[688]: eth1: DHCPv6 lease lost Feb 12 19:40:59.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.276118 systemd-networkd[688]: eth0: DHCPv6 lease lost Feb 12 19:40:59.277695 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 12 19:40:59.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.280000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:40:59.280000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:40:59.277818 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:40:59.279744 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:40:59.279906 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:40:59.281812 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 19:40:59.281865 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:40:59.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.285198 systemd[1]: Stopping network-cleanup.service... Feb 12 19:40:59.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.286443 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:40:59.286518 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:40:59.287058 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:40:59.287113 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:40:59.287864 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:40:59.287917 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:40:59.288573 systemd[1]: Stopping systemd-udevd.service... 
Feb 12 19:40:59.295646 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:40:59.307363 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:40:59.307601 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:40:59.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.309568 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:40:59.309712 systemd[1]: Stopped network-cleanup.service. Feb 12 19:40:59.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.311144 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:40:59.311208 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:40:59.312074 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:40:59.312113 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:40:59.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.312748 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:40:59.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.312797 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:40:59.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:40:59.314009 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:40:59.314075 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:40:59.314724 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:40:59.314781 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:40:59.316481 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:40:59.317460 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:40:59.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.317574 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:40:59.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.326852 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:40:59.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.326940 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:40:59.327560 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:40:59.327659 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:40:59.329703 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:40:59.330327 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:40:59.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 19:40:59.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:40:59.330424 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:40:59.331402 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:40:59.333141 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:40:59.347845 systemd[1]: Switching root. Feb 12 19:40:59.379480 iscsid[698]: iscsid shutting down. Feb 12 19:40:59.380255 systemd-journald[183]: Received SIGTERM from PID 1 (n/a). Feb 12 19:40:59.380343 systemd-journald[183]: Journal stopped Feb 12 19:41:04.191216 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:41:04.191291 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:41:04.191306 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:41:04.191325 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:41:04.191338 kernel: SELinux: policy capability open_perms=1 Feb 12 19:41:04.191351 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:41:04.191379 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:41:04.191392 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:41:04.191405 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:41:04.191417 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:41:04.191428 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:41:04.191441 systemd[1]: Successfully loaded SELinux policy in 53.504ms. Feb 12 19:41:04.191468 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.655ms. 
Feb 12 19:41:04.191486 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:41:04.191502 systemd[1]: Detected virtualization kvm. Feb 12 19:41:04.191519 systemd[1]: Detected architecture x86-64. Feb 12 19:41:04.191530 systemd[1]: Detected first boot. Feb 12 19:41:04.191549 systemd[1]: Hostname set to . Feb 12 19:41:04.191562 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:41:04.191574 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:41:04.191586 systemd[1]: Populated /etc with preset unit settings. Feb 12 19:41:04.191599 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:41:04.191615 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:41:04.191630 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:41:04.191648 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:41:04.191660 systemd[1]: Stopped iscsid.service. Feb 12 19:41:04.191673 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 19:41:04.191686 systemd[1]: Stopped initrd-switch-root.service. Feb 12 19:41:04.191698 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 19:41:04.191712 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Feb 12 19:41:04.191750 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:41:04.191771 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 19:41:04.191793 systemd[1]: Created slice system-getty.slice.
Feb 12 19:41:04.191810 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:41:04.191828 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:41:04.191845 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:41:04.191859 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:41:04.191915 systemd[1]: Created slice user.slice.
Feb 12 19:41:04.191933 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:41:04.191969 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:41:04.192013 systemd[1]: Set up automount boot.automount.
Feb 12 19:41:04.192026 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:41:04.192040 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 19:41:04.192052 systemd[1]: Stopped target initrd-fs.target.
Feb 12 19:41:04.192064 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 19:41:04.192080 systemd[1]: Reached target integritysetup.target.
Feb 12 19:41:04.192093 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:41:04.192108 systemd[1]: Reached target remote-fs.target.
Feb 12 19:41:04.192123 systemd[1]: Reached target slices.target.
Feb 12 19:41:04.192135 systemd[1]: Reached target swap.target.
Feb 12 19:41:04.192149 systemd[1]: Reached target torcx.target.
Feb 12 19:41:04.192161 systemd[1]: Reached target veritysetup.target.
Feb 12 19:41:04.192173 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:41:04.192186 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:41:04.192201 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:41:04.192214 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:41:04.192226 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:41:04.192238 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:41:04.192251 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:41:04.192263 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:41:04.192275 systemd[1]: Mounting media.mount...
Feb 12 19:41:04.192288 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:41:04.192300 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:41:04.192313 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:41:04.192327 systemd[1]: Mounting tmp.mount...
Feb 12 19:41:04.192370 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:41:04.192383 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:41:04.192395 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:41:04.192410 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:41:04.192423 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:41:04.192435 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:41:04.192447 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:41:04.192464 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:41:04.192485 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:41:04.192503 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:41:04.192519 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 19:41:04.192537 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 19:41:04.192555 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 19:41:04.192573 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 19:41:04.192592 systemd[1]: Stopped systemd-journald.service.
Feb 12 19:41:04.192610 systemd[1]: Starting systemd-journald.service...
Feb 12 19:41:04.192630 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:41:04.192643 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:41:04.192694 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:41:04.192706 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:41:04.192718 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 19:41:04.192731 systemd[1]: Stopped verity-setup.service.
Feb 12 19:41:04.192743 kernel: kauditd_printk_skb: 81 callbacks suppressed
Feb 12 19:41:04.192756 kernel: audit: type=1131 audit(1707766864.098:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.192771 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:41:04.192786 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:41:04.192798 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 19:41:04.192810 systemd[1]: Mounted media.mount.
Feb 12 19:41:04.192822 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 19:41:04.192836 kernel: loop: module loaded
Feb 12 19:41:04.192848 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 19:41:04.192859 systemd[1]: Mounted tmp.mount.
Feb 12 19:41:04.192875 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:41:04.192888 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 19:41:04.192902 kernel: audit: type=1130 audit(1707766864.126:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.192914 systemd[1]: Finished modprobe@configfs.service.
Feb 12 19:41:04.192927 kernel: audit: type=1130 audit(1707766864.133:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.192939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:41:04.192954 kernel: audit: type=1131 audit(1707766864.133:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.192966 kernel: fuse: init (API version 7.34)
Feb 12 19:41:04.192988 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:41:04.193001 kernel: audit: type=1130 audit(1707766864.145:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.193012 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 19:41:04.193024 kernel: audit: type=1131 audit(1707766864.145:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.193037 systemd[1]: Finished modprobe@drm.service.
Feb 12 19:41:04.193059 kernel: audit: type=1130 audit(1707766864.154:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.193073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:41:04.193086 kernel: audit: type=1131 audit(1707766864.154:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.193098 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:41:04.193110 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 19:41:04.193122 kernel: audit: type=1130 audit(1707766864.165:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.193137 systemd[1]: Finished modprobe@fuse.service.
Feb 12 19:41:04.193149 kernel: audit: type=1131 audit(1707766864.165:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.198388 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 19:41:04.198417 systemd[1]: Finished modprobe@loop.service.
Feb 12 19:41:04.198431 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:41:04.198444 systemd[1]: Finished systemd-network-generator.service.
Feb 12 19:41:04.198456 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 19:41:04.198469 systemd[1]: Reached target network-pre.target.
Feb 12 19:41:04.198481 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 19:41:04.198502 systemd-journald[955]: Journal started
Feb 12 19:41:04.198564 systemd-journald[955]: Runtime Journal (/run/log/journal/f5cb553ea7154e9980b834bce5a6252c) is 4.9M, max 39.5M, 34.5M free.
Feb 12 19:40:59.542000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 19:40:59.607000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:40:59.607000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:40:59.608000 audit: BPF prog-id=10 op=LOAD
Feb 12 19:40:59.608000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 19:40:59.608000 audit: BPF prog-id=11 op=LOAD
Feb 12 19:40:59.608000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 19:40:59.733000 audit[885]: AVC avc: denied { associate } for pid=885 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 19:40:59.733000 audit[885]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=868 pid=885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:40:59.733000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:41:04.203651 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 19:41:04.203722 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:40:59.734000 audit[885]: AVC avc: denied { associate } for pid=885 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 19:40:59.734000 audit[885]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=868 pid=885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:40:59.734000 audit: CWD cwd="/"
Feb 12 19:40:59.734000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:40:59.734000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:40:59.734000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:41:03.925000 audit: BPF prog-id=12 op=LOAD
Feb 12 19:41:03.925000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:41:03.925000 audit: BPF prog-id=13 op=LOAD
Feb 12 19:41:03.925000 audit: BPF prog-id=14 op=LOAD
Feb 12 19:41:03.925000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:41:03.925000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:41:03.927000 audit: BPF prog-id=15 op=LOAD
Feb 12 19:41:03.927000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 19:41:03.927000 audit: BPF prog-id=16 op=LOAD
Feb 12 19:41:03.927000 audit: BPF prog-id=17 op=LOAD
Feb 12 19:41:03.927000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 19:41:03.927000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 19:41:03.928000 audit: BPF prog-id=18 op=LOAD
Feb 12 19:41:03.928000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 19:41:03.928000 audit: BPF prog-id=19 op=LOAD
Feb 12 19:41:03.928000 audit: BPF prog-id=20 op=LOAD
Feb 12 19:41:03.928000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 19:41:03.928000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 19:41:03.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:03.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:03.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:03.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:03.938000 audit: BPF prog-id=18 op=UNLOAD
Feb 12 19:41:04.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.067000 audit: BPF prog-id=21 op=LOAD
Feb 12 19:41:04.067000 audit: BPF prog-id=22 op=LOAD
Feb 12 19:41:04.067000 audit: BPF prog-id=23 op=LOAD
Feb 12 19:41:04.067000 audit: BPF prog-id=19 op=UNLOAD
Feb 12 19:41:04.067000 audit: BPF prog-id=20 op=UNLOAD
Feb 12 19:41:04.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.188000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 19:41:04.188000 audit[955]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdbb8eb740 a2=4000 a3=7ffdbb8eb7dc items=0 ppid=1 pid=955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:41:04.188000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 19:40:59.731010 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:41:03.924040 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:40:59.731618 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:41:03.924058 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 19:40:59.731650 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:41:03.930445 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 19:40:59.731705 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 19:40:59.731723 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 19:40:59.731788 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 19:40:59.731806 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 19:40:59.732132 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 19:40:59.732192 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:40:59.732212 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:40:59.732972 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 19:40:59.733037 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 19:40:59.733068 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 19:40:59.733091 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 19:40:59.733123 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 19:40:59.733146 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:40:59Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 19:41:03.203125 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:41:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:41:03.203678 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:41:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:41:03.203966 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:41:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:41:04.217026 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 19:41:04.217104 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:41:03.204447 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:41:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 19:41:03.204534 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:41:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 19:41:03.204642 /usr/lib/systemd/system-generators/torcx-generator[885]: time="2024-02-12T19:41:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 19:41:04.219055 systemd[1]: Starting systemd-random-seed.service...
Feb 12 19:41:04.235433 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 19:41:04.235512 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:41:04.235545 systemd[1]: Started systemd-journald.service.
Feb 12 19:41:04.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.234575 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 19:41:04.236043 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 19:41:04.240647 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 19:41:04.252325 systemd[1]: Finished systemd-random-seed.service.
Feb 12 19:41:04.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.253031 systemd[1]: Reached target first-boot-complete.target.
Feb 12 19:41:04.254240 systemd-journald[955]: Time spent on flushing to /var/log/journal/f5cb553ea7154e9980b834bce5a6252c is 48.360ms for 1179 entries.
Feb 12 19:41:04.254240 systemd-journald[955]: System Journal (/var/log/journal/f5cb553ea7154e9980b834bce5a6252c) is 8.0M, max 195.6M, 187.6M free.
Feb 12 19:41:04.327311 systemd-journald[955]: Received client request to flush runtime journal.
Feb 12 19:41:04.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.265497 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:41:04.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.299094 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 19:41:04.301532 systemd[1]: Starting systemd-sysusers.service...
Feb 12 19:41:04.318177 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:41:04.320600 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 19:41:04.328632 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 19:41:04.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.342865 udevadm[995]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 19:41:04.336541 systemd[1]: Finished systemd-sysusers.service.
Feb 12 19:41:04.338773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:41:04.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:04.394274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:41:05.005815 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 19:41:05.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:05.006000 audit: BPF prog-id=24 op=LOAD
Feb 12 19:41:05.006000 audit: BPF prog-id=25 op=LOAD
Feb 12 19:41:05.006000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 19:41:05.006000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 19:41:05.008727 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:41:05.031767 systemd-udevd[998]: Using default interface naming scheme 'v252'.
Feb 12 19:41:05.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:05.066000 audit: BPF prog-id=26 op=LOAD
Feb 12 19:41:05.066114 systemd[1]: Started systemd-udevd.service.
Feb 12 19:41:05.068780 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:41:05.077000 audit: BPF prog-id=27 op=LOAD
Feb 12 19:41:05.077000 audit: BPF prog-id=28 op=LOAD
Feb 12 19:41:05.077000 audit: BPF prog-id=29 op=LOAD
Feb 12 19:41:05.079451 systemd[1]: Starting systemd-userdbd.service...
Feb 12 19:41:05.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:05.128639 systemd[1]: Started systemd-userdbd.service.
Feb 12 19:41:05.151179 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:41:05.151448 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:41:05.153272 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:41:05.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:05.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:05.161209 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:41:05.163232 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:41:05.163798 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 19:41:05.163934 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:41:05.164082 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:41:05.164689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 19:41:05.164924 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 19:41:05.165797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 19:41:05.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:05.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:41:05.170246 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 19:41:05.172954 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 19:41:05.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 12 19:41:05.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.175725 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:41:05.175881 systemd[1]: Finished modprobe@loop.service. Feb 12 19:41:05.177200 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:41:05.183566 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 19:41:05.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.250610 systemd-networkd[1005]: lo: Link UP Feb 12 19:41:05.250625 systemd-networkd[1005]: lo: Gained carrier Feb 12 19:41:05.251281 systemd-networkd[1005]: Enumeration completed Feb 12 19:41:05.251409 systemd[1]: Started systemd-networkd.service. Feb 12 19:41:05.252682 systemd-networkd[1005]: eth1: Configuring with /run/systemd/network/10-fa:25:5b:c7:bb:8f.network. Feb 12 19:41:05.254276 systemd-networkd[1005]: eth0: Configuring with /run/systemd/network/10-92:59:51:05:29:bb.network. 
Feb 12 19:41:05.255524 systemd-networkd[1005]: eth1: Link UP Feb 12 19:41:05.255703 systemd-networkd[1005]: eth1: Gained carrier Feb 12 19:41:05.261464 systemd-networkd[1005]: eth0: Link UP Feb 12 19:41:05.261477 systemd-networkd[1005]: eth0: Gained carrier Feb 12 19:41:05.276000 audit[1003]: AVC avc: denied { confidentiality } for pid=1003 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:41:05.281001 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 19:41:05.276000 audit[1003]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b5fdbff1a0 a1=32194 a2=7fc6fe0a2bc5 a3=5 items=108 ppid=998 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:05.276000 audit: CWD cwd="/" Feb 12 19:41:05.276000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=1 name=(null) inode=14103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=2 name=(null) inode=14103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=3 name=(null) inode=14104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.286929 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 12 19:41:05.276000 audit: PATH item=4 name=(null) inode=14103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=5 name=(null) inode=14105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=6 name=(null) inode=14103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=7 name=(null) inode=14106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=8 name=(null) inode=14106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=9 name=(null) inode=14107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=10 name=(null) inode=14106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=11 name=(null) inode=14108 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=12 name=(null) inode=14106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=13 
name=(null) inode=14109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=14 name=(null) inode=14106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=15 name=(null) inode=14110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=16 name=(null) inode=14106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=17 name=(null) inode=14111 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=18 name=(null) inode=14103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=19 name=(null) inode=14112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=20 name=(null) inode=14112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=21 name=(null) inode=14113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=22 name=(null) inode=14112 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=23 name=(null) inode=14114 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=24 name=(null) inode=14112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=25 name=(null) inode=14115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=26 name=(null) inode=14112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=27 name=(null) inode=14116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=28 name=(null) inode=14112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=29 name=(null) inode=14117 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=30 name=(null) inode=14103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=31 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=32 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=33 name=(null) inode=14119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=34 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=35 name=(null) inode=14120 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=36 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=37 name=(null) inode=14121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=38 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=39 name=(null) inode=14122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=40 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=41 name=(null) inode=14123 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=42 name=(null) inode=14103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=43 name=(null) inode=14124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=44 name=(null) inode=14124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=45 name=(null) inode=14125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=46 name=(null) inode=14124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=47 name=(null) inode=14126 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=48 name=(null) inode=14124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=49 name=(null) inode=14127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=50 name=(null) inode=14124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=51 name=(null) inode=14128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=52 name=(null) inode=14124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=53 name=(null) inode=14129 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=55 name=(null) inode=14130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=56 name=(null) inode=14130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=57 name=(null) inode=14131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=58 name=(null) inode=14130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
19:41:05.276000 audit: PATH item=59 name=(null) inode=14132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=60 name=(null) inode=14130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=61 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=62 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=63 name=(null) inode=14134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=64 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=65 name=(null) inode=14135 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=66 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=67 name=(null) inode=14136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=68 
name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=69 name=(null) inode=14137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=70 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=71 name=(null) inode=14138 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=72 name=(null) inode=14130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=73 name=(null) inode=14139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=74 name=(null) inode=14139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=75 name=(null) inode=14140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=76 name=(null) inode=14139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=77 name=(null) inode=14141 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=78 name=(null) inode=14139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=79 name=(null) inode=14142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=80 name=(null) inode=14139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=81 name=(null) inode=14143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=82 name=(null) inode=14139 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=83 name=(null) inode=14144 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=84 name=(null) inode=14130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=85 name=(null) inode=14145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=86 name=(null) inode=14145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=87 name=(null) inode=14146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=88 name=(null) inode=14145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=89 name=(null) inode=14147 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=90 name=(null) inode=14145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=91 name=(null) inode=14148 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=92 name=(null) inode=14145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=93 name=(null) inode=14149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=94 name=(null) inode=14145 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=95 name=(null) inode=14150 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=96 name=(null) inode=14130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=97 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=98 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=99 name=(null) inode=14152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=100 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=101 name=(null) inode=14153 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=102 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=103 name=(null) inode=14154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=104 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=105 name=(null) inode=14155 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=106 name=(null) inode=14151 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PATH item=107 name=(null) inode=14156 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:41:05.276000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:41:05.300006 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 19:41:05.303021 kernel: ACPI: button: Power Button [PWRF] Feb 12 19:41:05.322054 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 19:41:05.363006 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:41:05.489016 kernel: EDAC MC: Ver: 3.0.0 Feb 12 19:41:05.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.506086 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:41:05.508691 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:41:05.531699 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:41:05.561557 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:41:05.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:41:05.562255 systemd[1]: Reached target cryptsetup.target. Feb 12 19:41:05.564348 systemd[1]: Starting lvm2-activation.service... Feb 12 19:41:05.570463 lvm[1037]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:41:05.595553 systemd[1]: Finished lvm2-activation.service. Feb 12 19:41:05.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.596193 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:41:05.598747 systemd[1]: Mounting media-configdrive.mount... Feb 12 19:41:05.599190 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:41:05.599237 systemd[1]: Reached target machines.target. Feb 12 19:41:05.601022 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:41:05.618008 kernel: ISO 9660 Extensions: RRIP_1991A Feb 12 19:41:05.619502 systemd[1]: Mounted media-configdrive.mount. Feb 12 19:41:05.620238 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:41:05.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.620863 systemd[1]: Reached target local-fs.target. Feb 12 19:41:05.622919 systemd[1]: Starting ldconfig.service... Feb 12 19:41:05.623958 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:41:05.624147 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 12 19:41:05.625896 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:41:05.628185 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:41:05.629703 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:41:05.629771 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:41:05.631907 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:41:05.642286 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1043 (bootctl) Feb 12 19:41:05.644330 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:41:05.657238 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:41:05.659573 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:41:05.661878 systemd-tmpfiles[1045]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:41:05.812782 systemd-fsck[1049]: fsck.fat 4.2 (2021-01-31) Feb 12 19:41:05.812782 systemd-fsck[1049]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 19:41:05.814789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:41:05.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.817437 systemd[1]: Mounting boot.mount... Feb 12 19:41:05.836675 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:41:05.837810 systemd[1]: Finished systemd-machine-id-commit.service. 
Feb 12 19:41:05.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.848717 systemd[1]: Mounted boot.mount. Feb 12 19:41:05.872848 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:41:05.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.969018 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:41:05.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:05.972551 systemd[1]: Starting audit-rules.service... Feb 12 19:41:05.975558 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:41:05.979322 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:41:05.982000 audit: BPF prog-id=30 op=LOAD Feb 12 19:41:05.988000 audit: BPF prog-id=31 op=LOAD Feb 12 19:41:05.987935 systemd[1]: Starting systemd-resolved.service... Feb 12 19:41:05.992424 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:41:05.998689 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:41:06.010102 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:41:06.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:06.010832 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 12 19:41:06.018000 audit[1057]: SYSTEM_BOOT pid=1057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:41:06.025999 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:41:06.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:41:06.087000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:41:06.087000 audit[1072]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea0f0af80 a2=420 a3=0 items=0 ppid=1052 pid=1072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:41:06.087000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:41:06.088830 augenrules[1072]: No rules Feb 12 19:41:06.089218 systemd[1]: Finished audit-rules.service. Feb 12 19:41:06.094456 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:41:06.109951 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:41:06.110703 systemd[1]: Reached target time-set.target. Feb 12 19:41:06.118332 ldconfig[1042]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:41:06.128130 systemd[1]: Finished ldconfig.service. Feb 12 19:41:06.130683 systemd[1]: Starting systemd-update-done.service... Feb 12 19:41:06.142167 systemd[1]: Finished systemd-update-done.service. Feb 12 19:41:06.150151 systemd-resolved[1055]: Positive Trust Anchors: Feb 12 19:41:06.150551 systemd-resolved[1055]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:41:06.150661 systemd-resolved[1055]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:41:06.158082 systemd-resolved[1055]: Using system hostname 'ci-3510.3.2-e-e0f180cc85'. Feb 12 19:41:06.160702 systemd[1]: Started systemd-resolved.service. Feb 12 19:41:06.161485 systemd[1]: Reached target network.target. Feb 12 19:41:06.161925 systemd[1]: Reached target nss-lookup.target. Feb 12 19:41:06.162347 systemd[1]: Reached target sysinit.target. Feb 12 19:41:06.162839 systemd[1]: Started motdgen.path. Feb 12 19:41:06.163233 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:41:06.163821 systemd[1]: Started logrotate.timer. Feb 12 19:41:06.164287 systemd[1]: Started mdadm.timer. Feb 12 19:41:06.164599 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:41:06.164925 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:41:06.164959 systemd[1]: Reached target paths.target. Feb 12 19:41:06.165285 systemd[1]: Reached target timers.target. Feb 12 19:41:06.166142 systemd[1]: Listening on dbus.socket. Feb 12 19:41:06.168218 systemd[1]: Starting docker.socket... Feb 12 19:41:06.172640 systemd[1]: Listening on sshd.socket. Feb 12 19:41:06.173601 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 12 19:41:06.174316 systemd[1]: Listening on docker.socket. Feb 12 19:41:06.174913 systemd[1]: Reached target sockets.target. Feb 12 19:41:06.175354 systemd[1]: Reached target basic.target. Feb 12 19:41:06.175858 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:41:06.175896 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:41:06.177692 systemd[1]: Starting containerd.service... Feb 12 19:41:06.179398 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 19:41:06.183371 systemd[1]: Starting dbus.service... Feb 12 19:41:06.185887 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:41:06.189783 systemd[1]: Starting extend-filesystems.service... Feb 12 19:41:06.190326 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:41:06.193738 systemd[1]: Starting motdgen.service... Feb 12 19:41:06.195349 systemd-timesyncd[1056]: Contacted time server 50.205.57.38:123 (0.flatcar.pool.ntp.org). Feb 12 19:41:06.195416 systemd-timesyncd[1056]: Initial clock synchronization to Mon 2024-02-12 19:41:06.242427 UTC. Feb 12 19:41:06.196959 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:41:06.200812 systemd[1]: Starting prepare-critools.service... Feb 12 19:41:06.203896 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:41:06.207616 systemd[1]: Starting sshd-keygen.service... Feb 12 19:41:06.215300 systemd[1]: Starting systemd-logind.service... Feb 12 19:41:06.216308 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:41:06.216443 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 12 19:41:06.218206 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:41:06.219556 systemd[1]: Starting update-engine.service... Feb 12 19:41:06.223397 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:41:06.239354 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:41:06.239580 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:41:06.245113 jq[1101]: true Feb 12 19:41:06.248840 systemd[1]: Created slice system-sshd.slice. Feb 12 19:41:06.258317 jq[1111]: true Feb 12 19:41:06.263544 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:41:06.263806 systemd[1]: Finished motdgen.service. Feb 12 19:41:06.267657 jq[1086]: false Feb 12 19:41:06.267205 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:41:06.267431 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:41:06.282666 dbus-daemon[1085]: [system] SELinux support is enabled Feb 12 19:41:06.282861 systemd[1]: Started dbus.service. Feb 12 19:41:06.285950 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:41:06.285993 systemd[1]: Reached target system-config.target. Feb 12 19:41:06.286422 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:41:06.288200 systemd[1]: Starting user-configdrive.service... 
Feb 12 19:41:06.293916 tar[1108]: crictl Feb 12 19:41:06.297671 tar[1112]: ./ Feb 12 19:41:06.298064 tar[1112]: ./loopback Feb 12 19:41:06.318310 extend-filesystems[1087]: Found vda Feb 12 19:41:06.319464 extend-filesystems[1087]: Found vda1 Feb 12 19:41:06.320078 extend-filesystems[1087]: Found vda2 Feb 12 19:41:06.323197 extend-filesystems[1087]: Found vda3 Feb 12 19:41:06.324094 extend-filesystems[1087]: Found usr Feb 12 19:41:06.324681 extend-filesystems[1087]: Found vda4 Feb 12 19:41:06.325249 extend-filesystems[1087]: Found vda6 Feb 12 19:41:06.326310 extend-filesystems[1087]: Found vda7 Feb 12 19:41:06.326310 extend-filesystems[1087]: Found vda9 Feb 12 19:41:06.326310 extend-filesystems[1087]: Checking size of /dev/vda9 Feb 12 19:41:06.362523 coreos-cloudinit[1117]: 2024/02/12 19:41:06 Checking availability of "cloud-drive" Feb 12 19:41:06.362523 coreos-cloudinit[1117]: 2024/02/12 19:41:06 Fetching user-data from datasource of type "cloud-drive" Feb 12 19:41:06.362523 coreos-cloudinit[1117]: 2024/02/12 19:41:06 Attempting to read from "/media/configdrive/openstack/latest/user_data" Feb 12 19:41:06.362523 coreos-cloudinit[1117]: 2024/02/12 19:41:06 Fetching meta-data from datasource of type "cloud-drive" Feb 12 19:41:06.362523 coreos-cloudinit[1117]: 2024/02/12 19:41:06 Attempting to read from "/media/configdrive/openstack/latest/meta_data.json" Feb 12 19:41:06.361819 systemd-networkd[1005]: eth1: Gained IPv6LL Feb 12 19:41:06.375042 coreos-cloudinit[1117]: Detected an Ignition config. Exiting... Feb 12 19:41:06.375504 systemd[1]: Finished user-configdrive.service. Feb 12 19:41:06.376704 systemd[1]: Reached target user-config.target. Feb 12 19:41:06.406317 extend-filesystems[1087]: Resized partition /dev/vda9 Feb 12 19:41:06.407374 bash[1141]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:41:06.408622 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 12 19:41:06.429525 extend-filesystems[1144]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:41:06.437084 update_engine[1099]: I0212 19:41:06.436424 1099 main.cc:92] Flatcar Update Engine starting Feb 12 19:41:06.445024 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 12 19:41:06.445790 systemd[1]: Started update-engine.service. Feb 12 19:41:06.446146 update_engine[1099]: I0212 19:41:06.446110 1099 update_check_scheduler.cc:74] Next update check in 10m13s Feb 12 19:41:06.448905 systemd[1]: Started locksmithd.service. Feb 12 19:41:06.487613 systemd-logind[1098]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:41:06.491143 systemd-logind[1098]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:41:06.496494 systemd-logind[1098]: New seat seat0. Feb 12 19:41:06.511739 systemd[1]: Started systemd-logind.service. Feb 12 19:41:06.545030 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 12 19:41:06.568094 extend-filesystems[1144]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:41:06.568094 extend-filesystems[1144]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 12 19:41:06.568094 extend-filesystems[1144]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 12 19:41:06.571583 extend-filesystems[1087]: Resized filesystem in /dev/vda9 Feb 12 19:41:06.571583 extend-filesystems[1087]: Found vdb Feb 12 19:41:06.569766 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:41:06.570045 systemd[1]: Finished extend-filesystems.service. 
Feb 12 19:41:06.581741 tar[1112]: ./bandwidth Feb 12 19:41:06.584783 env[1116]: time="2024-02-12T19:41:06.584705833Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:41:06.606204 coreos-metadata[1082]: Feb 12 19:41:06.597 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:41:06.627116 coreos-metadata[1082]: Feb 12 19:41:06.624 INFO Fetch successful Feb 12 19:41:06.638474 unknown[1082]: wrote ssh authorized keys file for user: core Feb 12 19:41:06.658053 update-ssh-keys[1150]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:41:06.658393 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 19:41:06.708001 env[1116]: time="2024-02-12T19:41:06.706202518Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:41:06.708001 env[1116]: time="2024-02-12T19:41:06.707351752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:06.712195 tar[1112]: ./ptp Feb 12 19:41:06.718303 env[1116]: time="2024-02-12T19:41:06.718237891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:41:06.718303 env[1116]: time="2024-02-12T19:41:06.718288097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:06.718575 env[1116]: time="2024-02-12T19:41:06.718551973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:41:06.718575 env[1116]: time="2024-02-12T19:41:06.718573046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:06.718665 env[1116]: time="2024-02-12T19:41:06.718587137Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:41:06.718665 env[1116]: time="2024-02-12T19:41:06.718596741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:06.718780 env[1116]: time="2024-02-12T19:41:06.718751692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:06.721677 env[1116]: time="2024-02-12T19:41:06.721622745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:41:06.723115 env[1116]: time="2024-02-12T19:41:06.723055062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:41:06.723115 env[1116]: time="2024-02-12T19:41:06.723102759Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 12 19:41:06.723268 env[1116]: time="2024-02-12T19:41:06.723209787Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:41:06.723268 env[1116]: time="2024-02-12T19:41:06.723227748Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:41:06.736590 env[1116]: time="2024-02-12T19:41:06.736527887Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:41:06.736590 env[1116]: time="2024-02-12T19:41:06.736584065Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:41:06.736590 env[1116]: time="2024-02-12T19:41:06.736599088Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736640065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736654422Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736666577Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736678819Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736694140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736710947Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736729723Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736745518Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:41:06.736828 env[1116]: time="2024-02-12T19:41:06.736758751Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:41:06.737050 env[1116]: time="2024-02-12T19:41:06.736939174Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:41:06.737271 env[1116]: time="2024-02-12T19:41:06.737245539Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:41:06.737715 env[1116]: time="2024-02-12T19:41:06.737689408Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:41:06.737791 env[1116]: time="2024-02-12T19:41:06.737738345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.737791 env[1116]: time="2024-02-12T19:41:06.737753774Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:41:06.737848 env[1116]: time="2024-02-12T19:41:06.737805760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.737848 env[1116]: time="2024-02-12T19:41:06.737818602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.737848 env[1116]: time="2024-02-12T19:41:06.737831906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 12 19:41:06.737913 env[1116]: time="2024-02-12T19:41:06.737850778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.737913 env[1116]: time="2024-02-12T19:41:06.737868315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.737913 env[1116]: time="2024-02-12T19:41:06.737885673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.737913 env[1116]: time="2024-02-12T19:41:06.737901760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.738018 env[1116]: time="2024-02-12T19:41:06.737960234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.738018 env[1116]: time="2024-02-12T19:41:06.738006612Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:41:06.738276 env[1116]: time="2024-02-12T19:41:06.738225655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.738276 env[1116]: time="2024-02-12T19:41:06.738257107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.738343 env[1116]: time="2024-02-12T19:41:06.738278577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.738343 env[1116]: time="2024-02-12T19:41:06.738296391Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:41:06.738343 env[1116]: time="2024-02-12T19:41:06.738313184Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:41:06.738343 env[1116]: time="2024-02-12T19:41:06.738325013Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:41:06.738451 env[1116]: time="2024-02-12T19:41:06.738351399Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:41:06.738451 env[1116]: time="2024-02-12T19:41:06.738389703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 19:41:06.738716 env[1116]: time="2024-02-12T19:41:06.738644514Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:41:06.742404 env[1116]: time="2024-02-12T19:41:06.738729019Z" level=info msg="Connect containerd service" Feb 12 19:41:06.742404 env[1116]: time="2024-02-12T19:41:06.738769496Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:41:06.742404 env[1116]: time="2024-02-12T19:41:06.741492410Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:41:06.742823 env[1116]: time="2024-02-12T19:41:06.742772158Z" level=info msg="Start subscribing containerd event" Feb 12 19:41:06.742871 env[1116]: time="2024-02-12T19:41:06.742847727Z" level=info msg="Start recovering state" Feb 12 19:41:06.742941 env[1116]: time="2024-02-12T19:41:06.742927027Z" level=info msg="Start event monitor" Feb 12 19:41:06.743005 env[1116]: time="2024-02-12T19:41:06.742942127Z" level=info msg="Start snapshots syncer" Feb 12 19:41:06.743005 env[1116]: time="2024-02-12T19:41:06.742955213Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:41:06.743005 env[1116]: 
time="2024-02-12T19:41:06.742966181Z" level=info msg="Start streaming server" Feb 12 19:41:06.745893 env[1116]: time="2024-02-12T19:41:06.745842625Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:41:06.746050 env[1116]: time="2024-02-12T19:41:06.745905704Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:41:06.746088 systemd[1]: Started containerd.service. Feb 12 19:41:06.747659 env[1116]: time="2024-02-12T19:41:06.747621835Z" level=info msg="containerd successfully booted in 0.189852s" Feb 12 19:41:06.801475 tar[1112]: ./vlan Feb 12 19:41:06.884991 tar[1112]: ./host-device Feb 12 19:41:06.963434 tar[1112]: ./tuning Feb 12 19:41:07.031811 tar[1112]: ./vrf Feb 12 19:41:07.037235 sshd_keygen[1109]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:41:07.086475 systemd[1]: Finished sshd-keygen.service. Feb 12 19:41:07.090094 systemd[1]: Starting issuegen.service... Feb 12 19:41:07.093213 systemd[1]: Started sshd@0-146.190.38.70:22-139.178.68.195:47712.service. Feb 12 19:41:07.116219 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:41:07.116500 systemd[1]: Finished issuegen.service. Feb 12 19:41:07.117074 tar[1112]: ./sbr Feb 12 19:41:07.120219 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:41:07.136685 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:41:07.140184 systemd[1]: Started getty@tty1.service. Feb 12 19:41:07.143107 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:41:07.144104 systemd[1]: Reached target getty.target. 
Feb 12 19:41:07.190261 systemd-networkd[1005]: eth0: Gained IPv6LL Feb 12 19:41:07.196821 sshd[1161]: Accepted publickey for core from 139.178.68.195 port 47712 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:07.202580 sshd[1161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:07.208586 tar[1112]: ./tap Feb 12 19:41:07.222239 systemd[1]: Created slice user-500.slice. Feb 12 19:41:07.224772 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:41:07.238313 systemd-logind[1098]: New session 1 of user core. Feb 12 19:41:07.244105 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:41:07.247544 systemd[1]: Starting user@500.service... Feb 12 19:41:07.252431 (systemd)[1170]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:07.341212 tar[1112]: ./dhcp Feb 12 19:41:07.404040 systemd[1170]: Queued start job for default target default.target. Feb 12 19:41:07.404629 systemd[1170]: Reached target paths.target. Feb 12 19:41:07.404655 systemd[1170]: Reached target sockets.target. Feb 12 19:41:07.404670 systemd[1170]: Reached target timers.target. Feb 12 19:41:07.404682 systemd[1170]: Reached target basic.target. Feb 12 19:41:07.404731 systemd[1170]: Reached target default.target. Feb 12 19:41:07.404783 systemd[1170]: Startup finished in 134ms. Feb 12 19:41:07.404906 systemd[1]: Started user@500.service. Feb 12 19:41:07.406478 systemd[1]: Started session-1.scope. Feb 12 19:41:07.478652 systemd[1]: Started sshd@1-146.190.38.70:22-139.178.68.195:39310.service. Feb 12 19:41:07.528618 systemd[1]: Finished prepare-critools.service. 
Feb 12 19:41:07.547126 tar[1112]: ./static Feb 12 19:41:07.563927 sshd[1183]: Accepted publickey for core from 139.178.68.195 port 39310 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:07.565873 sshd[1183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:07.573569 systemd[1]: Started session-2.scope. Feb 12 19:41:07.574676 systemd-logind[1098]: New session 2 of user core. Feb 12 19:41:07.590335 tar[1112]: ./firewall Feb 12 19:41:07.649673 tar[1112]: ./macvlan Feb 12 19:41:07.650670 locksmithd[1145]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:41:07.651681 sshd[1183]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:07.657808 systemd[1]: Started sshd@2-146.190.38.70:22-139.178.68.195:39316.service. Feb 12 19:41:07.659530 systemd[1]: sshd@1-146.190.38.70:22-139.178.68.195:39310.service: Deactivated successfully. Feb 12 19:41:07.660477 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:41:07.663490 systemd-logind[1098]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:41:07.667611 systemd-logind[1098]: Removed session 2. Feb 12 19:41:07.711594 tar[1112]: ./dummy Feb 12 19:41:07.715533 sshd[1190]: Accepted publickey for core from 139.178.68.195 port 39316 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:07.716329 sshd[1190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:07.722704 systemd-logind[1098]: New session 3 of user core. Feb 12 19:41:07.723269 systemd[1]: Started session-3.scope. Feb 12 19:41:07.762392 tar[1112]: ./bridge Feb 12 19:41:07.791622 sshd[1190]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:07.796225 systemd[1]: sshd@2-146.190.38.70:22-139.178.68.195:39316.service: Deactivated successfully. Feb 12 19:41:07.797071 systemd[1]: session-3.scope: Deactivated successfully. 
Feb 12 19:41:07.800778 systemd-logind[1098]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:41:07.802682 systemd-logind[1098]: Removed session 3. Feb 12 19:41:07.826384 tar[1112]: ./ipvlan Feb 12 19:41:07.869910 tar[1112]: ./portmap Feb 12 19:41:07.909179 tar[1112]: ./host-local Feb 12 19:41:07.961531 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:41:07.962363 systemd[1]: Reached target multi-user.target. Feb 12 19:41:07.964619 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:41:07.977670 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:41:07.977969 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:41:07.986159 systemd[1]: Startup finished in 1.085s (kernel) + 6.676s (initrd) + 8.507s (userspace) = 16.270s. Feb 12 19:41:17.823907 systemd[1]: Started sshd@3-146.190.38.70:22-139.178.68.195:51544.service. Feb 12 19:41:17.870307 sshd[1200]: Accepted publickey for core from 139.178.68.195 port 51544 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:41:17.873362 sshd[1200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:41:17.880894 systemd-logind[1098]: New session 4 of user core. Feb 12 19:41:17.881628 systemd[1]: Started session-4.scope. Feb 12 19:41:17.952142 sshd[1200]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:17.957789 systemd[1]: sshd@3-146.190.38.70:22-139.178.68.195:51544.service: Deactivated successfully. Feb 12 19:41:17.958607 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:41:17.959615 systemd-logind[1098]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:41:17.961119 systemd[1]: Started sshd@4-146.190.38.70:22-139.178.68.195:51552.service. Feb 12 19:41:17.962914 systemd-logind[1098]: Removed session 4. 
Feb 12 19:41:18.009084 sshd[1206]: Accepted publickey for core from 139.178.68.195 port 51552 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:41:18.012359 sshd[1206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:41:18.021528 systemd[1]: Started session-5.scope.
Feb 12 19:41:18.023149 systemd-logind[1098]: New session 5 of user core.
Feb 12 19:41:18.087366 sshd[1206]: pam_unix(sshd:session): session closed for user core
Feb 12 19:41:18.095223 systemd[1]: sshd@4-146.190.38.70:22-139.178.68.195:51552.service: Deactivated successfully.
Feb 12 19:41:18.096336 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 19:41:18.098110 systemd-logind[1098]: Session 5 logged out. Waiting for processes to exit.
Feb 12 19:41:18.099860 systemd[1]: Started sshd@5-146.190.38.70:22-139.178.68.195:51554.service.
Feb 12 19:41:18.101853 systemd-logind[1098]: Removed session 5.
Feb 12 19:41:18.148760 sshd[1212]: Accepted publickey for core from 139.178.68.195 port 51554 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:41:18.151671 sshd[1212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:41:18.158838 systemd-logind[1098]: New session 6 of user core.
Feb 12 19:41:18.159387 systemd[1]: Started session-6.scope.
Feb 12 19:41:18.230897 sshd[1212]: pam_unix(sshd:session): session closed for user core
Feb 12 19:41:18.237602 systemd[1]: sshd@5-146.190.38.70:22-139.178.68.195:51554.service: Deactivated successfully.
Feb 12 19:41:18.238752 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 19:41:18.239539 systemd-logind[1098]: Session 6 logged out. Waiting for processes to exit.
Feb 12 19:41:18.241603 systemd[1]: Started sshd@6-146.190.38.70:22-139.178.68.195:51568.service.
Feb 12 19:41:18.244389 systemd-logind[1098]: Removed session 6.
Feb 12 19:41:18.294825 sshd[1218]: Accepted publickey for core from 139.178.68.195 port 51568 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc
Feb 12 19:41:18.298106 sshd[1218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:41:18.307860 systemd-logind[1098]: New session 7 of user core.
Feb 12 19:41:18.310190 systemd[1]: Started session-7.scope.
Feb 12 19:41:18.397569 sudo[1221]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 19:41:18.398129 sudo[1221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 19:41:18.958118 systemd[1]: Reloading.
Feb 12 19:41:19.070809 /usr/lib/systemd/system-generators/torcx-generator[1253]: time="2024-02-12T19:41:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:41:19.070842 /usr/lib/systemd/system-generators/torcx-generator[1253]: time="2024-02-12T19:41:19Z" level=info msg="torcx already run"
Feb 12 19:41:19.192942 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:41:19.192988 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:41:19.229367 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:41:19.331466 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 19:41:19.343159 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 19:41:19.344455 systemd[1]: Reached target network-online.target.
Feb 12 19:41:19.347439 systemd[1]: Started kubelet.service.
Feb 12 19:41:19.366956 systemd[1]: Starting coreos-metadata.service...
Feb 12 19:41:19.417420 coreos-metadata[1305]: Feb 12 19:41:19.417 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 12 19:41:19.430077 coreos-metadata[1305]: Feb 12 19:41:19.429 INFO Fetch successful
Feb 12 19:41:19.448642 kubelet[1297]: E0212 19:41:19.448013 1297 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Feb 12 19:41:19.447712 systemd[1]: Finished coreos-metadata.service.
Feb 12 19:41:19.455341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 19:41:19.455529 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 19:41:19.842446 systemd[1]: Stopped kubelet.service.
Feb 12 19:41:19.867502 systemd[1]: Reloading.
Feb 12 19:41:19.965481 /usr/lib/systemd/system-generators/torcx-generator[1360]: time="2024-02-12T19:41:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:41:19.973153 /usr/lib/systemd/system-generators/torcx-generator[1360]: time="2024-02-12T19:41:19Z" level=info msg="torcx already run"
Feb 12 19:41:20.094495 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:41:20.094531 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:41:20.124554 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:41:20.266839 systemd[1]: Started kubelet.service.
Feb 12 19:41:20.330880 kubelet[1407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:41:20.330880 kubelet[1407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 12 19:41:20.330880 kubelet[1407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 19:41:20.331345 kubelet[1407]: I0212 19:41:20.331008 1407 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 19:41:21.074112 kubelet[1407]: I0212 19:41:21.074064 1407 server.go:415] "Kubelet version" kubeletVersion="v1.27.2"
Feb 12 19:41:21.074318 kubelet[1407]: I0212 19:41:21.074300 1407 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 19:41:21.074635 kubelet[1407]: I0212 19:41:21.074619 1407 server.go:837] "Client rotation is on, will bootstrap in background"
Feb 12 19:41:21.077380 kubelet[1407]: I0212 19:41:21.077346 1407 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 19:41:21.080499 kubelet[1407]: I0212 19:41:21.080465 1407 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 19:41:21.080974 kubelet[1407]: I0212 19:41:21.080955 1407 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 19:41:21.081184 kubelet[1407]: I0212 19:41:21.081170 1407 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 19:41:21.081329 kubelet[1407]: I0212 19:41:21.081316 1407 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 19:41:21.081394 kubelet[1407]: I0212 19:41:21.081385 1407 container_manager_linux.go:302] "Creating device plugin manager"
Feb 12 19:41:21.081635 kubelet[1407]: I0212 19:41:21.081615 1407 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:41:21.087667 kubelet[1407]: I0212 19:41:21.087638 1407 kubelet.go:405] "Attempting to sync node with API server"
Feb 12 19:41:21.087869 kubelet[1407]: I0212 19:41:21.087857 1407 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 19:41:21.087954 kubelet[1407]: I0212 19:41:21.087943 1407 kubelet.go:309] "Adding apiserver pod source"
Feb 12 19:41:21.088064 kubelet[1407]: I0212 19:41:21.088054 1407 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 19:41:21.088802 kubelet[1407]: E0212 19:41:21.088769 1407 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:21.089962 kubelet[1407]: E0212 19:41:21.089938 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:21.090557 kubelet[1407]: I0212 19:41:21.090501 1407 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 19:41:21.096433 kubelet[1407]: W0212 19:41:21.090920 1407 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 19:41:21.096433 kubelet[1407]: I0212 19:41:21.091520 1407 server.go:1168] "Started kubelet"
Feb 12 19:41:21.098018 kubelet[1407]: E0212 19:41:21.097974 1407 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 19:41:21.098200 kubelet[1407]: E0212 19:41:21.098185 1407 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 19:41:21.098701 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 19:41:21.099024 kubelet[1407]: I0212 19:41:21.099000 1407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 19:41:21.102004 kubelet[1407]: I0212 19:41:21.101907 1407 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 19:41:21.103555 kubelet[1407]: I0212 19:41:21.103511 1407 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 12 19:41:21.105084 kubelet[1407]: I0212 19:41:21.104971 1407 server.go:461] "Adding debug handlers to kubelet server"
Feb 12 19:41:21.109730 kubelet[1407]: I0212 19:41:21.109682 1407 volume_manager.go:284] "Starting Kubelet Volume Manager"
Feb 12 19:41:21.122877 kubelet[1407]: I0212 19:41:21.122191 1407 desired_state_of_world_populator.go:145] "Desired state populator starts to run"
Feb 12 19:41:21.123483 kubelet[1407]: E0212 19:41:21.123356 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee22a29c5a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 91492954, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 91492954, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.124052 kubelet[1407]: W0212 19:41:21.123976 1407 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "146.190.38.70" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:41:21.124272 kubelet[1407]: E0212 19:41:21.124258 1407 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "146.190.38.70" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 19:41:21.124410 kubelet[1407]: W0212 19:41:21.124393 1407 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:41:21.124494 kubelet[1407]: E0212 19:41:21.124484 1407 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 19:41:21.124665 kubelet[1407]: E0212 19:41:21.124646 1407 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"146.190.38.70\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 12 19:41:21.144524 kubelet[1407]: W0212 19:41:21.144483 1407 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:41:21.144524 kubelet[1407]: E0212 19:41:21.144523 1407 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 19:41:21.144723 kubelet[1407]: E0212 19:41:21.144579 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee23087937", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 98168631, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 98168631, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.151307 kubelet[1407]: I0212 19:41:21.151279 1407 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 19:41:21.151307 kubelet[1407]: I0212 19:41:21.151297 1407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 19:41:21.151307 kubelet[1407]: I0212 19:41:21.151316 1407 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 19:41:21.152572 kubelet[1407]: E0212 19:41:21.152493 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee2627159f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 146.190.38.70 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150506399, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150506399, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.155125 kubelet[1407]: E0212 19:41:21.155011 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee262733eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 146.190.38.70 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150514155, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150514155, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:21.158321 kubelet[1407]: I0212 19:41:21.158247 1407 policy_none.go:49] "None policy: Start"
Feb 12 19:41:21.158788 kubelet[1407]: E0212 19:41:21.158440 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee26274511", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 146.190.38.70 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150518545, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150518545, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.159515 kubelet[1407]: I0212 19:41:21.159459 1407 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 19:41:21.159515 kubelet[1407]: I0212 19:41:21.159498 1407 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 19:41:21.167324 systemd[1]: Created slice kubepods.slice.
Feb 12 19:41:21.173075 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 12 19:41:21.182192 systemd[1]: Created slice kubepods-burstable.slice.
Feb 12 19:41:21.183799 kubelet[1407]: I0212 19:41:21.183770 1407 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 19:41:21.184105 kubelet[1407]: I0212 19:41:21.184080 1407 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 19:41:21.186780 kubelet[1407]: E0212 19:41:21.186754 1407 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"146.190.38.70\" not found"
Feb 12 19:41:21.194011 kubelet[1407]: E0212 19:41:21.193874 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee288e17f5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 190811637, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 190811637, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.215364 kubelet[1407]: I0212 19:41:21.215328 1407 kubelet_node_status.go:70] "Attempting to register node" node="146.190.38.70"
Feb 12 19:41:21.220925 kubelet[1407]: E0212 19:41:21.220703 1407 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="146.190.38.70"
Feb 12 19:41:21.221139 kubelet[1407]: E0212 19:41:21.221028 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee2627159f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 146.190.38.70 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150506399, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 215276503, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee2627159f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.226417 kubelet[1407]: E0212 19:41:21.226285 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee262733eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 146.190.38.70 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150514155, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 215284541, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee262733eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:21.228446 kubelet[1407]: E0212 19:41:21.228326 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee26274511", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 146.190.38.70 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150518545, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 215288930, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee26274511" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.258886 kubelet[1407]: I0212 19:41:21.258852 1407 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 19:41:21.260936 kubelet[1407]: I0212 19:41:21.260904 1407 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 19:41:21.261137 kubelet[1407]: I0212 19:41:21.261126 1407 status_manager.go:207] "Starting to sync pod status with apiserver"
Feb 12 19:41:21.261449 kubelet[1407]: I0212 19:41:21.261430 1407 kubelet.go:2257] "Starting kubelet main sync loop"
Feb 12 19:41:21.261618 kubelet[1407]: E0212 19:41:21.261606 1407 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 19:41:21.264117 kubelet[1407]: W0212 19:41:21.264091 1407 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:41:21.264117 kubelet[1407]: E0212 19:41:21.264121 1407 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 19:41:21.326918 kubelet[1407]: E0212 19:41:21.326785 1407 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"146.190.38.70\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 12 19:41:21.422654 kubelet[1407]: I0212 19:41:21.422615 1407 kubelet_node_status.go:70] "Attempting to register node" node="146.190.38.70"
Feb 12 19:41:21.424831 kubelet[1407]: E0212 19:41:21.424797 1407 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="146.190.38.70"
Feb 12 19:41:21.425401 kubelet[1407]: E0212 19:41:21.425262 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee2627159f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 146.190.38.70 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150506399, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 422534831, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee2627159f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.427743 kubelet[1407]: E0212 19:41:21.427649 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee262733eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 146.190.38.70 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150514155, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 422550824, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee262733eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:21.430544 kubelet[1407]: E0212 19:41:21.430382 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee26274511", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 146.190.38.70 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150518545, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 422555446, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee26274511" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:21.729829 kubelet[1407]: E0212 19:41:21.729700 1407 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"146.190.38.70\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms"
Feb 12 19:41:21.827195 kubelet[1407]: I0212 19:41:21.826617 1407 kubelet_node_status.go:70] "Attempting to register node" node="146.190.38.70"
Feb 12 19:41:21.828764 kubelet[1407]: E0212 19:41:21.828705 1407 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="146.190.38.70"
Feb 12 19:41:21.828972 kubelet[1407]: E0212 19:41:21.828779 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee2627159f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 146.190.38.70 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150506399, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 826564488, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee2627159f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.831181 kubelet[1407]: E0212 19:41:21.831055 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee262733eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 146.190.38.70 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150514155, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 826577163, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee262733eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 19:41:21.832546 kubelet[1407]: E0212 19:41:21.832403 1407 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"146.190.38.70.17b334ee26274511", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"146.190.38.70", UID:"146.190.38.70", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 146.190.38.70 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"146.190.38.70"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 150518545, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 41, 21, 826581328, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "146.190.38.70.17b334ee26274511" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:41:22.076947 kubelet[1407]: I0212 19:41:22.076874 1407 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:41:22.090293 kubelet[1407]: E0212 19:41:22.090245 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:22.487864 kubelet[1407]: E0212 19:41:22.487690 1407 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "146.190.38.70" not found Feb 12 19:41:22.537899 kubelet[1407]: E0212 19:41:22.537840 1407 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"146.190.38.70\" not found" node="146.190.38.70" Feb 12 19:41:22.630625 kubelet[1407]: I0212 19:41:22.630585 1407 kubelet_node_status.go:70] "Attempting to register node" node="146.190.38.70" Feb 12 19:41:22.644512 kubelet[1407]: I0212 19:41:22.644463 1407 kubelet_node_status.go:73] "Successfully registered node" node="146.190.38.70" Feb 12 19:41:22.664114 kubelet[1407]: I0212 19:41:22.664074 1407 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:41:22.664599 env[1116]: time="2024-02-12T19:41:22.664541780Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:41:22.665131 kubelet[1407]: I0212 19:41:22.664813 1407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:41:22.788589 sudo[1221]: pam_unix(sudo:session): session closed for user root Feb 12 19:41:22.793336 sshd[1218]: pam_unix(sshd:session): session closed for user core Feb 12 19:41:22.796835 systemd[1]: sshd@6-146.190.38.70:22-139.178.68.195:51568.service: Deactivated successfully. Feb 12 19:41:22.797744 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 12 19:41:22.798519 systemd-logind[1098]: Session 7 logged out. Waiting for processes to exit.
Feb 12 19:41:22.800184 systemd-logind[1098]: Removed session 7.
Feb 12 19:41:23.090814 kubelet[1407]: E0212 19:41:23.090761 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:23.091158 kubelet[1407]: I0212 19:41:23.090847 1407 apiserver.go:52] "Watching apiserver"
Feb 12 19:41:23.094807 kubelet[1407]: I0212 19:41:23.094765 1407 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:41:23.095131 kubelet[1407]: I0212 19:41:23.095112 1407 topology_manager.go:212] "Topology Admit Handler"
Feb 12 19:41:23.101175 systemd[1]: Created slice kubepods-burstable-pode16adb87_01a6_4c54_aad5_72939bdd5902.slice.
Feb 12 19:41:23.123539 systemd[1]: Created slice kubepods-besteffort-pod3230fa70_a2c4_47ac_8836_e8f55c9d315f.slice.
Feb 12 19:41:23.124506 kubelet[1407]: I0212 19:41:23.124483 1407 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
Feb 12 19:41:23.134283 kubelet[1407]: I0212 19:41:23.134232 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-bpf-maps\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134283 kubelet[1407]: I0212 19:41:23.134280 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e16adb87-01a6-4c54-aad5-72939bdd5902-clustermesh-secrets\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134567 kubelet[1407]: I0212 19:41:23.134319 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e16adb87-01a6-4c54-aad5-72939bdd5902-hubble-tls\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134567 kubelet[1407]: I0212 19:41:23.134352 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg456\" (UniqueName: \"kubernetes.io/projected/3230fa70-a2c4-47ac-8836-e8f55c9d315f-kube-api-access-dg456\") pod \"kube-proxy-gqvk4\" (UID: \"3230fa70-a2c4-47ac-8836-e8f55c9d315f\") " pod="kube-system/kube-proxy-gqvk4"
Feb 12 19:41:23.134567 kubelet[1407]: I0212 19:41:23.134394 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-run\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134567 kubelet[1407]: I0212 19:41:23.134414 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-cgroup\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134567 kubelet[1407]: I0212 19:41:23.134432 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cni-path\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134567 kubelet[1407]: I0212 19:41:23.134465 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-host-proc-sys-net\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134857 kubelet[1407]: I0212 19:41:23.134497 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3230fa70-a2c4-47ac-8836-e8f55c9d315f-lib-modules\") pod \"kube-proxy-gqvk4\" (UID: \"3230fa70-a2c4-47ac-8836-e8f55c9d315f\") " pod="kube-system/kube-proxy-gqvk4"
Feb 12 19:41:23.134857 kubelet[1407]: I0212 19:41:23.134551 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-lib-modules\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134857 kubelet[1407]: I0212 19:41:23.134573 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-config-path\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134857 kubelet[1407]: I0212 19:41:23.134592 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-host-proc-sys-kernel\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.134857 kubelet[1407]: I0212 19:41:23.134621 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3230fa70-a2c4-47ac-8836-e8f55c9d315f-xtables-lock\") pod \"kube-proxy-gqvk4\" (UID: \"3230fa70-a2c4-47ac-8836-e8f55c9d315f\") " pod="kube-system/kube-proxy-gqvk4"
Feb 12 19:41:23.134857 kubelet[1407]: I0212 19:41:23.134661 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-hostproc\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.135260 kubelet[1407]: I0212 19:41:23.134693 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-etc-cni-netd\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.135260 kubelet[1407]: I0212 19:41:23.134711 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-xtables-lock\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.135260 kubelet[1407]: I0212 19:41:23.134732 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45zzs\" (UniqueName: \"kubernetes.io/projected/e16adb87-01a6-4c54-aad5-72939bdd5902-kube-api-access-45zzs\") pod \"cilium-kmgjp\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " pod="kube-system/cilium-kmgjp"
Feb 12 19:41:23.135260 kubelet[1407]: I0212 19:41:23.134761 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3230fa70-a2c4-47ac-8836-e8f55c9d315f-kube-proxy\") pod \"kube-proxy-gqvk4\" (UID: \"3230fa70-a2c4-47ac-8836-e8f55c9d315f\") " pod="kube-system/kube-proxy-gqvk4"
Feb 12 19:41:23.135260 kubelet[1407]: I0212 19:41:23.134770 1407 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 19:41:23.423365 kubelet[1407]: E0212 19:41:23.422304 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:41:23.424046 env[1116]: time="2024-02-12T19:41:23.423878735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmgjp,Uid:e16adb87-01a6-4c54-aad5-72939bdd5902,Namespace:kube-system,Attempt:0,}"
Feb 12 19:41:23.434143 kubelet[1407]: E0212 19:41:23.434053 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:41:23.435535 env[1116]: time="2024-02-12T19:41:23.435370131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqvk4,Uid:3230fa70-a2c4-47ac-8836-e8f55c9d315f,Namespace:kube-system,Attempt:0,}"
Feb 12 19:41:24.048623 env[1116]: time="2024-02-12T19:41:24.048556877Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:24.050710 env[1116]: time="2024-02-12T19:41:24.050615611Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:24.054353 env[1116]: time="2024-02-12T19:41:24.054300352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:24.056576 env[1116]: time="2024-02-12T19:41:24.056513200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:24.057627 env[1116]: time="2024-02-12T19:41:24.057580010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:24.065388 env[1116]: time="2024-02-12T19:41:24.065309247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:24.066776 env[1116]: time="2024-02-12T19:41:24.066727527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:24.067646 env[1116]: time="2024-02-12T19:41:24.067609294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:24.092037 kubelet[1407]: E0212 19:41:24.091961 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:24.105490 env[1116]: time="2024-02-12T19:41:24.105379091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:41:24.105490 env[1116]: time="2024-02-12T19:41:24.105428041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:41:24.105490 env[1116]: time="2024-02-12T19:41:24.105442115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:41:24.105996 env[1116]: time="2024-02-12T19:41:24.105930758Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf pid=1468 runtime=io.containerd.runc.v2
Feb 12 19:41:24.107684 env[1116]: time="2024-02-12T19:41:24.107580153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:41:24.107794 env[1116]: time="2024-02-12T19:41:24.107713443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:41:24.107794 env[1116]: time="2024-02-12T19:41:24.107748112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:41:24.108097 env[1116]: time="2024-02-12T19:41:24.108038005Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6cecaebb0b2af41c25ac8afd98402d11d9d8bbc4a1e8836c27ef5d3804fb515 pid=1467 runtime=io.containerd.runc.v2
Feb 12 19:41:24.126902 systemd[1]: Started cri-containerd-181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf.scope.
Feb 12 19:41:24.149677 systemd[1]: Started cri-containerd-f6cecaebb0b2af41c25ac8afd98402d11d9d8bbc4a1e8836c27ef5d3804fb515.scope.
Feb 12 19:41:24.194709 env[1116]: time="2024-02-12T19:41:24.194662128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmgjp,Uid:e16adb87-01a6-4c54-aad5-72939bdd5902,Namespace:kube-system,Attempt:0,} returns sandbox id \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\"" Feb 12 19:41:24.198585 kubelet[1407]: E0212 19:41:24.198503 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:24.200025 env[1116]: time="2024-02-12T19:41:24.199958446Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:41:24.219311 env[1116]: time="2024-02-12T19:41:24.219251660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqvk4,Uid:3230fa70-a2c4-47ac-8836-e8f55c9d315f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6cecaebb0b2af41c25ac8afd98402d11d9d8bbc4a1e8836c27ef5d3804fb515\"" Feb 12 19:41:24.220648 kubelet[1407]: E0212 19:41:24.220620 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:24.245959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117248641.mount: Deactivated successfully. 
Feb 12 19:41:25.092785 kubelet[1407]: E0212 19:41:25.092727 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:26.094467 kubelet[1407]: E0212 19:41:26.094417 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:27.095280 kubelet[1407]: E0212 19:41:27.095231 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:28.096381 kubelet[1407]: E0212 19:41:28.096304 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:29.097508 kubelet[1407]: E0212 19:41:29.097458 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:29.981293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277504600.mount: Deactivated successfully. 
Feb 12 19:41:30.098246 kubelet[1407]: E0212 19:41:30.098185 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:31.098963 kubelet[1407]: E0212 19:41:31.098880 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:32.100007 kubelet[1407]: E0212 19:41:32.099950 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:33.100968 kubelet[1407]: E0212 19:41:33.100919 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:33.900057 env[1116]: time="2024-02-12T19:41:33.899928041Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:33.902712 env[1116]: time="2024-02-12T19:41:33.902651775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:33.904952 env[1116]: time="2024-02-12T19:41:33.904896281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:41:33.906220 env[1116]: time="2024-02-12T19:41:33.906152046Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 19:41:33.907813 env[1116]: time="2024-02-12T19:41:33.907753648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\""
Feb 12 19:41:33.910749 env[1116]: time="2024-02-12T19:41:33.910692426Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:41:33.925070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614602127.mount: Deactivated successfully.
Feb 12 19:41:33.933440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169415370.mount: Deactivated successfully.
Feb 12 19:41:33.941255 env[1116]: time="2024-02-12T19:41:33.941178192Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\""
Feb 12 19:41:33.942743 env[1116]: time="2024-02-12T19:41:33.942693206Z" level=info msg="StartContainer for \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\""
Feb 12 19:41:33.980433 systemd[1]: Started cri-containerd-8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694.scope.
Feb 12 19:41:34.043945 env[1116]: time="2024-02-12T19:41:34.043862662Z" level=info msg="StartContainer for \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\" returns successfully"
Feb 12 19:41:34.059664 systemd[1]: cri-containerd-8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694.scope: Deactivated successfully.
Feb 12 19:41:34.101200 kubelet[1407]: E0212 19:41:34.101132 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:41:34.176355 env[1116]: time="2024-02-12T19:41:34.175553320Z" level=info msg="shim disconnected" id=8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694
Feb 12 19:41:34.176355 env[1116]: time="2024-02-12T19:41:34.175642695Z" level=warning msg="cleaning up after shim disconnected" id=8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694 namespace=k8s.io
Feb 12 19:41:34.176355 env[1116]: time="2024-02-12T19:41:34.175658697Z" level=info msg="cleaning up dead shim"
Feb 12 19:41:34.189541 env[1116]: time="2024-02-12T19:41:34.189489627Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:41:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1588 runtime=io.containerd.runc.v2\n"
Feb 12 19:41:34.323519 kubelet[1407]: E0212 19:41:34.322859 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:41:34.326240 env[1116]: time="2024-02-12T19:41:34.325880604Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:41:34.359863 env[1116]: time="2024-02-12T19:41:34.359802454Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\""
Feb 12 19:41:34.360855 env[1116]: time="2024-02-12T19:41:34.360815329Z" level=info msg="StartContainer for \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\""
Feb 12 19:41:34.398316 systemd[1]: Started cri-containerd-c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef.scope.
Feb 12 19:41:34.468782 env[1116]: time="2024-02-12T19:41:34.467872721Z" level=info msg="StartContainer for \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\" returns successfully"
Feb 12 19:41:34.485321 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:41:34.485581 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:41:34.485951 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 19:41:34.490331 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:41:34.501044 systemd[1]: cri-containerd-c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef.scope: Deactivated successfully.
Feb 12 19:41:34.509776 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:41:34.581202 env[1116]: time="2024-02-12T19:41:34.581133114Z" level=info msg="shim disconnected" id=c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef
Feb 12 19:41:34.581202 env[1116]: time="2024-02-12T19:41:34.581199063Z" level=warning msg="cleaning up after shim disconnected" id=c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef namespace=k8s.io
Feb 12 19:41:34.581202 env[1116]: time="2024-02-12T19:41:34.581214792Z" level=info msg="cleaning up dead shim"
Feb 12 19:41:34.593594 env[1116]: time="2024-02-12T19:41:34.593465000Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:41:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1653 runtime=io.containerd.runc.v2\n"
Feb 12 19:41:34.923600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694-rootfs.mount: Deactivated successfully.
Feb 12 19:41:35.102099 kubelet[1407]: E0212 19:41:35.102036 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:35.153592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435545353.mount: Deactivated successfully. Feb 12 19:41:35.327359 kubelet[1407]: E0212 19:41:35.327316 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:35.329970 env[1116]: time="2024-02-12T19:41:35.329910548Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:41:35.352480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184003863.mount: Deactivated successfully. Feb 12 19:41:35.368273 env[1116]: time="2024-02-12T19:41:35.368173833Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\"" Feb 12 19:41:35.369735 env[1116]: time="2024-02-12T19:41:35.369663277Z" level=info msg="StartContainer for \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\"" Feb 12 19:41:35.416596 systemd[1]: Started cri-containerd-774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6.scope. Feb 12 19:41:35.490935 systemd[1]: cri-containerd-774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6.scope: Deactivated successfully. 
Feb 12 19:41:35.493313 env[1116]: time="2024-02-12T19:41:35.493251231Z" level=info msg="StartContainer for \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\" returns successfully" Feb 12 19:41:35.594545 env[1116]: time="2024-02-12T19:41:35.593993010Z" level=info msg="shim disconnected" id=774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6 Feb 12 19:41:35.594863 env[1116]: time="2024-02-12T19:41:35.594828521Z" level=warning msg="cleaning up after shim disconnected" id=774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6 namespace=k8s.io Feb 12 19:41:35.594997 env[1116]: time="2024-02-12T19:41:35.594966019Z" level=info msg="cleaning up dead shim" Feb 12 19:41:35.611053 env[1116]: time="2024-02-12T19:41:35.610996847Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:41:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1714 runtime=io.containerd.runc.v2\n" Feb 12 19:41:35.921737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174945103.mount: Deactivated successfully. 
Feb 12 19:41:35.984245 env[1116]: time="2024-02-12T19:41:35.984163376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:35.987457 env[1116]: time="2024-02-12T19:41:35.987389344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:35.990057 env[1116]: time="2024-02-12T19:41:35.989946841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:35.992790 env[1116]: time="2024-02-12T19:41:35.992673445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:35.994748 env[1116]: time="2024-02-12T19:41:35.993746993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 12 19:41:35.996942 env[1116]: time="2024-02-12T19:41:35.996897119Z" level=info msg="CreateContainer within sandbox \"f6cecaebb0b2af41c25ac8afd98402d11d9d8bbc4a1e8836c27ef5d3804fb515\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:41:36.016331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518639224.mount: Deactivated successfully. Feb 12 19:41:36.022727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105926578.mount: Deactivated successfully. 
Feb 12 19:41:36.029659 env[1116]: time="2024-02-12T19:41:36.029557601Z" level=info msg="CreateContainer within sandbox \"f6cecaebb0b2af41c25ac8afd98402d11d9d8bbc4a1e8836c27ef5d3804fb515\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dbecb64fe449a361770f171115d553059cacadf084535886cf8291400c26bce8\"" Feb 12 19:41:36.030768 env[1116]: time="2024-02-12T19:41:36.030707953Z" level=info msg="StartContainer for \"dbecb64fe449a361770f171115d553059cacadf084535886cf8291400c26bce8\"" Feb 12 19:41:36.059739 systemd[1]: Started cri-containerd-dbecb64fe449a361770f171115d553059cacadf084535886cf8291400c26bce8.scope. Feb 12 19:41:36.102496 kubelet[1407]: E0212 19:41:36.102449 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:36.118857 env[1116]: time="2024-02-12T19:41:36.118797193Z" level=info msg="StartContainer for \"dbecb64fe449a361770f171115d553059cacadf084535886cf8291400c26bce8\" returns successfully" Feb 12 19:41:36.331723 kubelet[1407]: E0212 19:41:36.331228 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:36.335506 kubelet[1407]: E0212 19:41:36.334656 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:36.344341 env[1116]: time="2024-02-12T19:41:36.344287006Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:41:36.363867 kubelet[1407]: I0212 19:41:36.363813 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gqvk4" podStartSLOduration=2.590335236 
podCreationTimestamp="2024-02-12 19:41:22 +0000 UTC" firstStartedPulling="2024-02-12 19:41:24.221775653 +0000 UTC m=+3.950741535" lastFinishedPulling="2024-02-12 19:41:35.995164039 +0000 UTC m=+15.724129961" observedRunningTime="2024-02-12 19:41:36.363554708 +0000 UTC m=+16.092520607" watchObservedRunningTime="2024-02-12 19:41:36.363723662 +0000 UTC m=+16.092689562" Feb 12 19:41:36.383965 env[1116]: time="2024-02-12T19:41:36.383896102Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\"" Feb 12 19:41:36.386421 env[1116]: time="2024-02-12T19:41:36.386356246Z" level=info msg="StartContainer for \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\"" Feb 12 19:41:36.418149 systemd[1]: Started cri-containerd-efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0.scope. Feb 12 19:41:36.494101 systemd[1]: cri-containerd-efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0.scope: Deactivated successfully. 
Feb 12 19:41:36.497112 env[1116]: time="2024-02-12T19:41:36.496836489Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode16adb87_01a6_4c54_aad5_72939bdd5902.slice/cri-containerd-efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0.scope/memory.events\": no such file or directory" Feb 12 19:41:36.501718 env[1116]: time="2024-02-12T19:41:36.501644603Z" level=info msg="StartContainer for \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\" returns successfully" Feb 12 19:41:36.572206 env[1116]: time="2024-02-12T19:41:36.572140832Z" level=info msg="shim disconnected" id=efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0 Feb 12 19:41:36.572206 env[1116]: time="2024-02-12T19:41:36.572210115Z" level=warning msg="cleaning up after shim disconnected" id=efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0 namespace=k8s.io Feb 12 19:41:36.572585 env[1116]: time="2024-02-12T19:41:36.572226951Z" level=info msg="cleaning up dead shim" Feb 12 19:41:36.588528 env[1116]: time="2024-02-12T19:41:36.588384379Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:41:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1913 runtime=io.containerd.runc.v2\n" Feb 12 19:41:37.102804 kubelet[1407]: E0212 19:41:37.102666 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:37.344381 kubelet[1407]: E0212 19:41:37.343589 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:37.344381 kubelet[1407]: E0212 19:41:37.343968 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:37.348095 env[1116]: time="2024-02-12T19:41:37.348028029Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:41:37.381284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542708521.mount: Deactivated successfully. Feb 12 19:41:37.393593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609635827.mount: Deactivated successfully. Feb 12 19:41:37.406832 env[1116]: time="2024-02-12T19:41:37.406230573Z" level=info msg="CreateContainer within sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\"" Feb 12 19:41:37.408185 env[1116]: time="2024-02-12T19:41:37.408108520Z" level=info msg="StartContainer for \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\"" Feb 12 19:41:37.440666 systemd[1]: Started cri-containerd-2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0.scope. 
Feb 12 19:41:37.544110 env[1116]: time="2024-02-12T19:41:37.543942389Z" level=info msg="StartContainer for \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\" returns successfully" Feb 12 19:41:37.729240 kubelet[1407]: I0212 19:41:37.729102 1407 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:41:38.103813 kubelet[1407]: E0212 19:41:38.103759 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:38.133013 kernel: Initializing XFRM netlink socket Feb 12 19:41:38.353263 kubelet[1407]: E0212 19:41:38.353042 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:38.384499 kubelet[1407]: I0212 19:41:38.383955 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kmgjp" podStartSLOduration=6.676242871 podCreationTimestamp="2024-02-12 19:41:22 +0000 UTC" firstStartedPulling="2024-02-12 19:41:24.199238378 +0000 UTC m=+3.928204258" lastFinishedPulling="2024-02-12 19:41:33.90680242 +0000 UTC m=+13.635768325" observedRunningTime="2024-02-12 19:41:38.383537431 +0000 UTC m=+18.112503342" watchObservedRunningTime="2024-02-12 19:41:38.383806938 +0000 UTC m=+18.112772835" Feb 12 19:41:39.105513 kubelet[1407]: E0212 19:41:39.105442 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:39.356682 kubelet[1407]: E0212 19:41:39.356274 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:39.919710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:41:39.919886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: 
link becomes ready Feb 12 19:41:39.921109 systemd-networkd[1005]: cilium_host: Link UP Feb 12 19:41:39.921327 systemd-networkd[1005]: cilium_net: Link UP Feb 12 19:41:39.921611 systemd-networkd[1005]: cilium_net: Gained carrier Feb 12 19:41:39.921915 systemd-networkd[1005]: cilium_host: Gained carrier Feb 12 19:41:40.046347 systemd-networkd[1005]: cilium_net: Gained IPv6LL Feb 12 19:41:40.106041 kubelet[1407]: E0212 19:41:40.105967 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:40.136251 systemd-networkd[1005]: cilium_vxlan: Link UP Feb 12 19:41:40.136270 systemd-networkd[1005]: cilium_vxlan: Gained carrier Feb 12 19:41:40.142596 systemd-networkd[1005]: cilium_host: Gained IPv6LL Feb 12 19:41:40.357538 kubelet[1407]: E0212 19:41:40.357492 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:40.482240 kubelet[1407]: I0212 19:41:40.482187 1407 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:41:40.490472 systemd[1]: Created slice kubepods-besteffort-pod66e1cc67_16ee_491b_9aae_4679ef3696f3.slice. 
Feb 12 19:41:40.525163 kernel: NET: Registered PF_ALG protocol family Feb 12 19:41:40.574146 kubelet[1407]: I0212 19:41:40.574003 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm7vz\" (UniqueName: \"kubernetes.io/projected/66e1cc67-16ee-491b-9aae-4679ef3696f3-kube-api-access-xm7vz\") pod \"nginx-deployment-845c78c8b9-jsp9b\" (UID: \"66e1cc67-16ee-491b-9aae-4679ef3696f3\") " pod="default/nginx-deployment-845c78c8b9-jsp9b" Feb 12 19:41:40.812056 env[1116]: time="2024-02-12T19:41:40.811919143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-jsp9b,Uid:66e1cc67-16ee-491b-9aae-4679ef3696f3,Namespace:default,Attempt:0,}" Feb 12 19:41:41.088653 kubelet[1407]: E0212 19:41:41.088388 1407 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:41.107154 kubelet[1407]: E0212 19:41:41.106847 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:41.626163 systemd-networkd[1005]: cilium_vxlan: Gained IPv6LL Feb 12 19:41:41.722974 systemd-networkd[1005]: lxc_health: Link UP Feb 12 19:41:41.733130 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:41:41.732335 systemd-networkd[1005]: lxc_health: Gained carrier Feb 12 19:41:42.107574 kubelet[1407]: E0212 19:41:42.107525 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:42.385843 systemd-networkd[1005]: lxc2b0d2fb2a5fa: Link UP Feb 12 19:41:42.395048 kernel: eth0: renamed from tmpffbdc Feb 12 19:41:42.405095 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2b0d2fb2a5fa: link becomes ready Feb 12 19:41:42.405839 systemd-networkd[1005]: lxc2b0d2fb2a5fa: Gained carrier Feb 12 19:41:43.108855 kubelet[1407]: E0212 19:41:43.108769 1407 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:43.222257 systemd-networkd[1005]: lxc_health: Gained IPv6LL Feb 12 19:41:43.426022 kubelet[1407]: E0212 19:41:43.425487 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:43.734217 systemd-networkd[1005]: lxc2b0d2fb2a5fa: Gained IPv6LL Feb 12 19:41:44.110057 kubelet[1407]: E0212 19:41:44.109965 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:45.111269 kubelet[1407]: E0212 19:41:45.111218 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:46.112084 kubelet[1407]: E0212 19:41:46.111963 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:47.112448 kubelet[1407]: E0212 19:41:47.112352 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:47.858017 env[1116]: time="2024-02-12T19:41:47.857276715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:41:47.858017 env[1116]: time="2024-02-12T19:41:47.857427465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:41:47.858017 env[1116]: time="2024-02-12T19:41:47.857466694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:41:47.858017 env[1116]: time="2024-02-12T19:41:47.857694395Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffbdc14701ca4aa6596c4379305910c8d7da8acfa2408d499b4052f9b04e8370 pid=2458 runtime=io.containerd.runc.v2 Feb 12 19:41:47.883857 systemd[1]: run-containerd-runc-k8s.io-ffbdc14701ca4aa6596c4379305910c8d7da8acfa2408d499b4052f9b04e8370-runc.vjzv5P.mount: Deactivated successfully. Feb 12 19:41:47.892836 systemd[1]: Started cri-containerd-ffbdc14701ca4aa6596c4379305910c8d7da8acfa2408d499b4052f9b04e8370.scope. Feb 12 19:41:47.960136 env[1116]: time="2024-02-12T19:41:47.960068399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-jsp9b,Uid:66e1cc67-16ee-491b-9aae-4679ef3696f3,Namespace:default,Attempt:0,} returns sandbox id \"ffbdc14701ca4aa6596c4379305910c8d7da8acfa2408d499b4052f9b04e8370\"" Feb 12 19:41:47.962290 env[1116]: time="2024-02-12T19:41:47.962240228Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:41:48.113858 kubelet[1407]: E0212 19:41:48.112899 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:49.114164 kubelet[1407]: E0212 19:41:49.114093 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:50.114274 kubelet[1407]: E0212 19:41:50.114230 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:50.571478 kubelet[1407]: I0212 19:41:50.571430 1407 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:41:50.572175 kubelet[1407]: E0212 19:41:50.572144 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:51.114655 kubelet[1407]: E0212 19:41:51.114600 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:51.388491 kubelet[1407]: E0212 19:41:51.387966 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:41:51.436068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158220473.mount: Deactivated successfully. Feb 12 19:41:51.721536 update_engine[1099]: I0212 19:41:51.720309 1099 update_attempter.cc:509] Updating boot flags... Feb 12 19:41:52.115095 kubelet[1407]: E0212 19:41:52.115023 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:52.613648 env[1116]: time="2024-02-12T19:41:52.613573235Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:52.615420 env[1116]: time="2024-02-12T19:41:52.615374143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:52.617185 env[1116]: time="2024-02-12T19:41:52.617135947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:52.619168 env[1116]: time="2024-02-12T19:41:52.619115312Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:41:52.620261 env[1116]: 
time="2024-02-12T19:41:52.620225145Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 19:41:52.623071 env[1116]: time="2024-02-12T19:41:52.623024160Z" level=info msg="CreateContainer within sandbox \"ffbdc14701ca4aa6596c4379305910c8d7da8acfa2408d499b4052f9b04e8370\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 19:41:52.636735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228144667.mount: Deactivated successfully. Feb 12 19:41:52.643366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185423258.mount: Deactivated successfully. Feb 12 19:41:52.649707 env[1116]: time="2024-02-12T19:41:52.649644847Z" level=info msg="CreateContainer within sandbox \"ffbdc14701ca4aa6596c4379305910c8d7da8acfa2408d499b4052f9b04e8370\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"194f580acb28a4fc724e53dbd7d445117361cf96820323f592a5236984b86a5a\"" Feb 12 19:41:52.651257 env[1116]: time="2024-02-12T19:41:52.651147112Z" level=info msg="StartContainer for \"194f580acb28a4fc724e53dbd7d445117361cf96820323f592a5236984b86a5a\"" Feb 12 19:41:52.675903 systemd[1]: Started cri-containerd-194f580acb28a4fc724e53dbd7d445117361cf96820323f592a5236984b86a5a.scope. 
Feb 12 19:41:52.721772 env[1116]: time="2024-02-12T19:41:52.721717344Z" level=info msg="StartContainer for \"194f580acb28a4fc724e53dbd7d445117361cf96820323f592a5236984b86a5a\" returns successfully" Feb 12 19:41:53.116167 kubelet[1407]: E0212 19:41:53.116102 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:53.412695 kubelet[1407]: I0212 19:41:53.412265 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-jsp9b" podStartSLOduration=8.753192185 podCreationTimestamp="2024-02-12 19:41:40 +0000 UTC" firstStartedPulling="2024-02-12 19:41:47.961673667 +0000 UTC m=+27.690639560" lastFinishedPulling="2024-02-12 19:41:52.620704592 +0000 UTC m=+32.349670475" observedRunningTime="2024-02-12 19:41:53.411817493 +0000 UTC m=+33.140783415" watchObservedRunningTime="2024-02-12 19:41:53.4122231 +0000 UTC m=+33.141188999" Feb 12 19:41:54.116765 kubelet[1407]: E0212 19:41:54.116702 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:55.117750 kubelet[1407]: E0212 19:41:55.117702 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:56.119042 kubelet[1407]: E0212 19:41:56.118957 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:57.120380 kubelet[1407]: E0212 19:41:57.120333 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:58.121971 kubelet[1407]: E0212 19:41:58.121923 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:41:58.265875 kubelet[1407]: I0212 19:41:58.265814 1407 topology_manager.go:212] "Topology Admit Handler" Feb 12 
19:41:58.272735 systemd[1]: Created slice kubepods-besteffort-pod1ec4d0e7_d849_4e39_aba9_8230603e7635.slice. Feb 12 19:41:58.305885 kubelet[1407]: I0212 19:41:58.305805 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1ec4d0e7-d849-4e39-aba9-8230603e7635-data\") pod \"nfs-server-provisioner-0\" (UID: \"1ec4d0e7-d849-4e39-aba9-8230603e7635\") " pod="default/nfs-server-provisioner-0" Feb 12 19:41:58.305885 kubelet[1407]: I0212 19:41:58.305896 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9b24\" (UniqueName: \"kubernetes.io/projected/1ec4d0e7-d849-4e39-aba9-8230603e7635-kube-api-access-h9b24\") pod \"nfs-server-provisioner-0\" (UID: \"1ec4d0e7-d849-4e39-aba9-8230603e7635\") " pod="default/nfs-server-provisioner-0" Feb 12 19:41:58.578702 env[1116]: time="2024-02-12T19:41:58.578629406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1ec4d0e7-d849-4e39-aba9-8230603e7635,Namespace:default,Attempt:0,}" Feb 12 19:41:58.625749 systemd-networkd[1005]: lxcdf9f875468dd: Link UP Feb 12 19:41:58.633160 kernel: eth0: renamed from tmpabe4f Feb 12 19:41:58.644313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:41:58.644431 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdf9f875468dd: link becomes ready Feb 12 19:41:58.644119 systemd-networkd[1005]: lxcdf9f875468dd: Gained carrier Feb 12 19:41:58.923345 env[1116]: time="2024-02-12T19:41:58.922706542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:41:58.923345 env[1116]: time="2024-02-12T19:41:58.922773298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:41:58.923345 env[1116]: time="2024-02-12T19:41:58.922792965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:41:58.923974 env[1116]: time="2024-02-12T19:41:58.923836424Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/abe4f1d1cf8b2b26a714dfa966343174cfa9b422b62dce13e8c257debe963674 pid=2598 runtime=io.containerd.runc.v2 Feb 12 19:41:58.944726 systemd[1]: Started cri-containerd-abe4f1d1cf8b2b26a714dfa966343174cfa9b422b62dce13e8c257debe963674.scope. Feb 12 19:41:59.005904 env[1116]: time="2024-02-12T19:41:59.005840288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1ec4d0e7-d849-4e39-aba9-8230603e7635,Namespace:default,Attempt:0,} returns sandbox id \"abe4f1d1cf8b2b26a714dfa966343174cfa9b422b62dce13e8c257debe963674\"" Feb 12 19:41:59.008685 env[1116]: time="2024-02-12T19:41:59.008532902Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 19:41:59.123644 kubelet[1407]: E0212 19:41:59.123595 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:00.124605 kubelet[1407]: E0212 19:42:00.124527 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:00.374506 systemd-networkd[1005]: lxcdf9f875468dd: Gained IPv6LL Feb 12 19:42:01.088559 kubelet[1407]: E0212 19:42:01.088493 1407 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:01.124824 kubelet[1407]: E0212 19:42:01.124768 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:02.126354 kubelet[1407]: E0212 
19:42:02.126290 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:02.679067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584955574.mount: Deactivated successfully. Feb 12 19:42:03.139752 kubelet[1407]: E0212 19:42:03.131687 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:04.132615 kubelet[1407]: E0212 19:42:04.132557 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:05.133760 kubelet[1407]: E0212 19:42:05.133695 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:06.135043 kubelet[1407]: E0212 19:42:06.134933 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:06.539620 env[1116]: time="2024-02-12T19:42:06.539412982Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:06.548869 env[1116]: time="2024-02-12T19:42:06.548793637Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:06.552999 env[1116]: time="2024-02-12T19:42:06.552921555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:06.557501 env[1116]: time="2024-02-12T19:42:06.557376773Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:06.558752 env[1116]: time="2024-02-12T19:42:06.558696536Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 19:42:06.562819 env[1116]: time="2024-02-12T19:42:06.562745791Z" level=info msg="CreateContainer within sandbox \"abe4f1d1cf8b2b26a714dfa966343174cfa9b422b62dce13e8c257debe963674\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 19:42:06.577837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058028264.mount: Deactivated successfully. Feb 12 19:42:06.597432 env[1116]: time="2024-02-12T19:42:06.597160968Z" level=info msg="CreateContainer within sandbox \"abe4f1d1cf8b2b26a714dfa966343174cfa9b422b62dce13e8c257debe963674\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"33f6d145e2f7fdf2d81d31da456b1f119a8e7db93abbf8135f5b29f72d93e99c\"" Feb 12 19:42:06.598246 env[1116]: time="2024-02-12T19:42:06.598151445Z" level=info msg="StartContainer for \"33f6d145e2f7fdf2d81d31da456b1f119a8e7db93abbf8135f5b29f72d93e99c\"" Feb 12 19:42:06.631916 systemd[1]: Started cri-containerd-33f6d145e2f7fdf2d81d31da456b1f119a8e7db93abbf8135f5b29f72d93e99c.scope. 
Feb 12 19:42:06.696376 env[1116]: time="2024-02-12T19:42:06.696313705Z" level=info msg="StartContainer for \"33f6d145e2f7fdf2d81d31da456b1f119a8e7db93abbf8135f5b29f72d93e99c\" returns successfully" Feb 12 19:42:07.136211 kubelet[1407]: E0212 19:42:07.136126 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:08.137111 kubelet[1407]: E0212 19:42:08.137053 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:09.138617 kubelet[1407]: E0212 19:42:09.138501 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:10.139875 kubelet[1407]: E0212 19:42:10.139811 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:11.140483 kubelet[1407]: E0212 19:42:11.140431 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:12.141722 kubelet[1407]: E0212 19:42:12.141638 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:13.142323 kubelet[1407]: E0212 19:42:13.142271 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:14.143184 kubelet[1407]: E0212 19:42:14.143115 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:15.144753 kubelet[1407]: E0212 19:42:15.144667 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:16.145567 kubelet[1407]: E0212 19:42:16.145524 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:42:16.202278 kubelet[1407]: I0212 19:42:16.202213 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.650066845 podCreationTimestamp="2024-02-12 19:41:58 +0000 UTC" firstStartedPulling="2024-02-12 19:41:59.008219902 +0000 UTC m=+38.737185793" lastFinishedPulling="2024-02-12 19:42:06.560322014 +0000 UTC m=+46.289287901" observedRunningTime="2024-02-12 19:42:07.462255295 +0000 UTC m=+47.191221195" watchObservedRunningTime="2024-02-12 19:42:16.202168953 +0000 UTC m=+55.931134847" Feb 12 19:42:16.202552 kubelet[1407]: I0212 19:42:16.202480 1407 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:42:16.210911 systemd[1]: Created slice kubepods-besteffort-pod4f0e155d_b3c9_444c_ba0a_d54b9adf08da.slice. Feb 12 19:42:16.381748 kubelet[1407]: I0212 19:42:16.381697 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c151678e-89d2-43ea-8511-3d3e9b886058\" (UniqueName: \"kubernetes.io/nfs/4f0e155d-b3c9-444c-ba0a-d54b9adf08da-pvc-c151678e-89d2-43ea-8511-3d3e9b886058\") pod \"test-pod-1\" (UID: \"4f0e155d-b3c9-444c-ba0a-d54b9adf08da\") " pod="default/test-pod-1" Feb 12 19:42:16.382126 kubelet[1407]: I0212 19:42:16.382100 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbvn\" (UniqueName: \"kubernetes.io/projected/4f0e155d-b3c9-444c-ba0a-d54b9adf08da-kube-api-access-mnbvn\") pod \"test-pod-1\" (UID: \"4f0e155d-b3c9-444c-ba0a-d54b9adf08da\") " pod="default/test-pod-1" Feb 12 19:42:16.525012 kernel: FS-Cache: Loaded Feb 12 19:42:16.575470 kernel: RPC: Registered named UNIX socket transport module. Feb 12 19:42:16.575655 kernel: RPC: Registered udp transport module. Feb 12 19:42:16.575693 kernel: RPC: Registered tcp transport module. Feb 12 19:42:16.576258 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 12 19:42:16.648044 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 19:42:16.850518 kernel: NFS: Registering the id_resolver key type Feb 12 19:42:16.850709 kernel: Key type id_resolver registered Feb 12 19:42:16.851425 kernel: Key type id_legacy registered Feb 12 19:42:17.146874 kubelet[1407]: E0212 19:42:17.146713 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:18.147483 kubelet[1407]: E0212 19:42:18.147437 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:19.148932 kubelet[1407]: E0212 19:42:19.148844 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:20.150247 kubelet[1407]: E0212 19:42:20.150136 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:21.088759 kubelet[1407]: E0212 19:42:21.088618 1407 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:21.150692 kubelet[1407]: E0212 19:42:21.150616 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:22.151447 kubelet[1407]: E0212 19:42:22.151367 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:23.038418 nfsidmap[2790]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-e-e0f180cc85' Feb 12 19:42:23.152098 kubelet[1407]: E0212 19:42:23.152046 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:24.152587 kubelet[1407]: E0212 19:42:24.152456 1407 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:25.153326 kubelet[1407]: E0212 19:42:25.153269 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:26.154757 kubelet[1407]: E0212 19:42:26.154696 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:27.155967 kubelet[1407]: E0212 19:42:27.155824 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:28.156888 kubelet[1407]: E0212 19:42:28.156818 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:29.157146 kubelet[1407]: E0212 19:42:29.157073 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:29.176019 nfsidmap[2795]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-e-e0f180cc85' Feb 12 19:42:29.414793 env[1116]: time="2024-02-12T19:42:29.414635229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4f0e155d-b3c9-444c-ba0a-d54b9adf08da,Namespace:default,Attempt:0,}" Feb 12 19:42:29.466238 systemd-networkd[1005]: lxc37c18276c2e8: Link UP Feb 12 19:42:29.472137 kernel: eth0: renamed from tmp59f9e Feb 12 19:42:29.480558 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:42:29.480719 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc37c18276c2e8: link becomes ready Feb 12 19:42:29.480683 systemd-networkd[1005]: lxc37c18276c2e8: Gained carrier Feb 12 19:42:29.819918 env[1116]: time="2024-02-12T19:42:29.819788400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:42:29.820197 env[1116]: time="2024-02-12T19:42:29.819938352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:42:29.820197 env[1116]: time="2024-02-12T19:42:29.819973884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:42:29.820585 env[1116]: time="2024-02-12T19:42:29.820365959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59f9e5e0570e9e5de637856144758d0ef7ee8af61e06502e84e54d982e8df401 pid=2825 runtime=io.containerd.runc.v2 Feb 12 19:42:29.841042 systemd[1]: Started cri-containerd-59f9e5e0570e9e5de637856144758d0ef7ee8af61e06502e84e54d982e8df401.scope. Feb 12 19:42:29.912545 env[1116]: time="2024-02-12T19:42:29.912464953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4f0e155d-b3c9-444c-ba0a-d54b9adf08da,Namespace:default,Attempt:0,} returns sandbox id \"59f9e5e0570e9e5de637856144758d0ef7ee8af61e06502e84e54d982e8df401\"" Feb 12 19:42:29.917680 env[1116]: time="2024-02-12T19:42:29.917614660Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:42:30.158541 kubelet[1407]: E0212 19:42:30.158317 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:30.327022 env[1116]: time="2024-02-12T19:42:30.326289148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:30.329197 env[1116]: time="2024-02-12T19:42:30.329094294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 19:42:30.336621 env[1116]: time="2024-02-12T19:42:30.336551838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:30.339267 env[1116]: time="2024-02-12T19:42:30.339186300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:42:30.339884 env[1116]: time="2024-02-12T19:42:30.339827335Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 19:42:30.343786 env[1116]: time="2024-02-12T19:42:30.343710298Z" level=info msg="CreateContainer within sandbox \"59f9e5e0570e9e5de637856144758d0ef7ee8af61e06502e84e54d982e8df401\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 19:42:30.366164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143058707.mount: Deactivated successfully. Feb 12 19:42:30.380764 env[1116]: time="2024-02-12T19:42:30.380691202Z" level=info msg="CreateContainer within sandbox \"59f9e5e0570e9e5de637856144758d0ef7ee8af61e06502e84e54d982e8df401\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"aa064428492a5bef6229c90dae01e9b851e707ef2897af9336a9d0c31f8a2e20\"" Feb 12 19:42:30.385674 env[1116]: time="2024-02-12T19:42:30.385610519Z" level=info msg="StartContainer for \"aa064428492a5bef6229c90dae01e9b851e707ef2897af9336a9d0c31f8a2e20\"" Feb 12 19:42:30.416727 systemd[1]: Started cri-containerd-aa064428492a5bef6229c90dae01e9b851e707ef2897af9336a9d0c31f8a2e20.scope. 
Feb 12 19:42:30.473724 env[1116]: time="2024-02-12T19:42:30.473645482Z" level=info msg="StartContainer for \"aa064428492a5bef6229c90dae01e9b851e707ef2897af9336a9d0c31f8a2e20\" returns successfully" Feb 12 19:42:30.540457 kubelet[1407]: I0212 19:42:30.539558 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.114623505 podCreationTimestamp="2024-02-12 19:41:58 +0000 UTC" firstStartedPulling="2024-02-12 19:42:29.915265703 +0000 UTC m=+69.644231582" lastFinishedPulling="2024-02-12 19:42:30.340135977 +0000 UTC m=+70.069101856" observedRunningTime="2024-02-12 19:42:30.538034011 +0000 UTC m=+70.266999910" watchObservedRunningTime="2024-02-12 19:42:30.539493779 +0000 UTC m=+70.268459682" Feb 12 19:42:31.158506 systemd-networkd[1005]: lxc37c18276c2e8: Gained IPv6LL Feb 12 19:42:31.160735 kubelet[1407]: E0212 19:42:31.160699 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:32.162187 kubelet[1407]: E0212 19:42:32.162125 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:32.503241 env[1116]: time="2024-02-12T19:42:32.503056391Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:42:32.512888 env[1116]: time="2024-02-12T19:42:32.512827794Z" level=info msg="StopContainer for \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\" with timeout 1 (s)" Feb 12 19:42:32.513865 env[1116]: time="2024-02-12T19:42:32.513818973Z" level=info msg="Stop container \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\" with signal terminated" Feb 12 19:42:32.524082 systemd-networkd[1005]: lxc_health: Link DOWN Feb 12 
19:42:32.524093 systemd-networkd[1005]: lxc_health: Lost carrier Feb 12 19:42:32.558621 systemd[1]: cri-containerd-2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0.scope: Deactivated successfully. Feb 12 19:42:32.559031 systemd[1]: cri-containerd-2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0.scope: Consumed 10.386s CPU time. Feb 12 19:42:32.587619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0-rootfs.mount: Deactivated successfully. Feb 12 19:42:32.614038 env[1116]: time="2024-02-12T19:42:32.613943878Z" level=info msg="shim disconnected" id=2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0 Feb 12 19:42:32.614038 env[1116]: time="2024-02-12T19:42:32.614035488Z" level=warning msg="cleaning up after shim disconnected" id=2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0 namespace=k8s.io Feb 12 19:42:32.614317 env[1116]: time="2024-02-12T19:42:32.614057780Z" level=info msg="cleaning up dead shim" Feb 12 19:42:32.626678 env[1116]: time="2024-02-12T19:42:32.626533204Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:42:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2955 runtime=io.containerd.runc.v2\n" Feb 12 19:42:32.630480 env[1116]: time="2024-02-12T19:42:32.630393180Z" level=info msg="StopContainer for \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\" returns successfully" Feb 12 19:42:32.631485 env[1116]: time="2024-02-12T19:42:32.631439533Z" level=info msg="StopPodSandbox for \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\"" Feb 12 19:42:32.631747 env[1116]: time="2024-02-12T19:42:32.631716320Z" level=info msg="Container to stop \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:42:32.631841 env[1116]: time="2024-02-12T19:42:32.631823424Z" level=info 
msg="Container to stop \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:42:32.631909 env[1116]: time="2024-02-12T19:42:32.631894611Z" level=info msg="Container to stop \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:42:32.632009 env[1116]: time="2024-02-12T19:42:32.631957676Z" level=info msg="Container to stop \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:42:32.632093 env[1116]: time="2024-02-12T19:42:32.632076048Z" level=info msg="Container to stop \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:42:32.634529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf-shm.mount: Deactivated successfully. Feb 12 19:42:32.643544 systemd[1]: cri-containerd-181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf.scope: Deactivated successfully. Feb 12 19:42:32.669960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf-rootfs.mount: Deactivated successfully. 
Feb 12 19:42:32.679593 env[1116]: time="2024-02-12T19:42:32.679524316Z" level=info msg="shim disconnected" id=181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf Feb 12 19:42:32.679593 env[1116]: time="2024-02-12T19:42:32.679594045Z" level=warning msg="cleaning up after shim disconnected" id=181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf namespace=k8s.io Feb 12 19:42:32.679593 env[1116]: time="2024-02-12T19:42:32.679607806Z" level=info msg="cleaning up dead shim" Feb 12 19:42:32.691838 env[1116]: time="2024-02-12T19:42:32.691765981Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:42:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2988 runtime=io.containerd.runc.v2\n" Feb 12 19:42:32.692222 env[1116]: time="2024-02-12T19:42:32.692178413Z" level=info msg="TearDown network for sandbox \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" successfully" Feb 12 19:42:32.692222 env[1116]: time="2024-02-12T19:42:32.692209593Z" level=info msg="StopPodSandbox for \"181b20cecc6d19cca75aed3d1457bd1d445789eac862a13827a38294f10ccbaf\" returns successfully" Feb 12 19:42:32.822356 kubelet[1407]: I0212 19:42:32.822311 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e16adb87-01a6-4c54-aad5-72939bdd5902-hubble-tls\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.823184 kubelet[1407]: I0212 19:42:32.822670 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-hostproc\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.823434 kubelet[1407]: I0212 19:42:32.823397 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45zzs\" (UniqueName: 
\"kubernetes.io/projected/e16adb87-01a6-4c54-aad5-72939bdd5902-kube-api-access-45zzs\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.823744 kubelet[1407]: I0212 19:42:32.823708 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-cgroup\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.824635 kubelet[1407]: I0212 19:42:32.822746 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-hostproc" (OuterVolumeSpecName: "hostproc") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.824760 kubelet[1407]: I0212 19:42:32.824104 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.824904 kubelet[1407]: I0212 19:42:32.824887 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cni-path\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.825012 kubelet[1407]: I0212 19:42:32.824995 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-bpf-maps\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.825163 kubelet[1407]: I0212 19:42:32.825153 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-host-proc-sys-net\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.825413 kubelet[1407]: I0212 19:42:32.825397 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-config-path\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.826027 kubelet[1407]: I0212 19:42:32.826008 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-xtables-lock\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.826204 kubelet[1407]: I0212 19:42:32.826191 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/e16adb87-01a6-4c54-aad5-72939bdd5902-clustermesh-secrets\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.826411 kubelet[1407]: I0212 19:42:32.826395 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-run\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.826551 kubelet[1407]: I0212 19:42:32.826540 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-lib-modules\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.826789 kubelet[1407]: I0212 19:42:32.826774 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-host-proc-sys-kernel\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.826953 kubelet[1407]: I0212 19:42:32.826939 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-etc-cni-netd\") pod \"e16adb87-01a6-4c54-aad5-72939bdd5902\" (UID: \"e16adb87-01a6-4c54-aad5-72939bdd5902\") " Feb 12 19:42:32.827166 kubelet[1407]: I0212 19:42:32.827140 1407 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-hostproc\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.827417 kubelet[1407]: I0212 19:42:32.827398 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-cgroup\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.827417 kubelet[1407]: I0212 19:42:32.825295 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cni-path" (OuterVolumeSpecName: "cni-path") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.827513 kubelet[1407]: I0212 19:42:32.827326 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.827513 kubelet[1407]: I0212 19:42:32.826577 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.827513 kubelet[1407]: I0212 19:42:32.825334 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.827513 kubelet[1407]: I0212 19:42:32.825347 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.827513 kubelet[1407]: W0212 19:42:32.825926 1407 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e16adb87-01a6-4c54-aad5-72939bdd5902/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:42:32.827885 kubelet[1407]: I0212 19:42:32.827348 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.828036 kubelet[1407]: I0212 19:42:32.827361 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.828133 kubelet[1407]: I0212 19:42:32.827373 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:32.832820 systemd[1]: var-lib-kubelet-pods-e16adb87\x2d01a6\x2d4c54\x2daad5\x2d72939bdd5902-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d45zzs.mount: Deactivated successfully. Feb 12 19:42:32.834317 kubelet[1407]: I0212 19:42:32.834270 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:42:32.835054 kubelet[1407]: I0212 19:42:32.835017 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16adb87-01a6-4c54-aad5-72939bdd5902-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:42:32.835706 kubelet[1407]: I0212 19:42:32.835666 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16adb87-01a6-4c54-aad5-72939bdd5902-kube-api-access-45zzs" (OuterVolumeSpecName: "kube-api-access-45zzs") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). InnerVolumeSpecName "kube-api-access-45zzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:42:32.837263 kubelet[1407]: I0212 19:42:32.837215 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16adb87-01a6-4c54-aad5-72939bdd5902-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e16adb87-01a6-4c54-aad5-72939bdd5902" (UID: "e16adb87-01a6-4c54-aad5-72939bdd5902"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:42:32.928797 kubelet[1407]: I0212 19:42:32.928730 1407 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cni-path\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.928797 kubelet[1407]: I0212 19:42:32.928783 1407 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-bpf-maps\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.928797 kubelet[1407]: I0212 19:42:32.928800 1407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-host-proc-sys-net\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.928797 kubelet[1407]: I0212 19:42:32.928818 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-config-path\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.929144 kubelet[1407]: I0212 19:42:32.928835 1407 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e16adb87-01a6-4c54-aad5-72939bdd5902-clustermesh-secrets\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.929144 kubelet[1407]: I0212 19:42:32.928849 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-cilium-run\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.929144 kubelet[1407]: I0212 19:42:32.928863 1407 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-lib-modules\") on node \"146.190.38.70\" DevicePath 
\"\"" Feb 12 19:42:32.929144 kubelet[1407]: I0212 19:42:32.928878 1407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-host-proc-sys-kernel\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.929144 kubelet[1407]: I0212 19:42:32.928893 1407 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-etc-cni-netd\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.929144 kubelet[1407]: I0212 19:42:32.928906 1407 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e16adb87-01a6-4c54-aad5-72939bdd5902-xtables-lock\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.929144 kubelet[1407]: I0212 19:42:32.928920 1407 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e16adb87-01a6-4c54-aad5-72939bdd5902-hubble-tls\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:32.929144 kubelet[1407]: I0212 19:42:32.928937 1407 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-45zzs\" (UniqueName: \"kubernetes.io/projected/e16adb87-01a6-4c54-aad5-72939bdd5902-kube-api-access-45zzs\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:33.162924 kubelet[1407]: E0212 19:42:33.162768 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:33.269785 systemd[1]: Removed slice kubepods-burstable-pode16adb87_01a6_4c54_aad5_72939bdd5902.slice. Feb 12 19:42:33.269900 systemd[1]: kubepods-burstable-pode16adb87_01a6_4c54_aad5_72939bdd5902.slice: Consumed 10.591s CPU time. 
Feb 12 19:42:33.470176 systemd[1]: var-lib-kubelet-pods-e16adb87\x2d01a6\x2d4c54\x2daad5\x2d72939bdd5902-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:42:33.470518 systemd[1]: var-lib-kubelet-pods-e16adb87\x2d01a6\x2d4c54\x2daad5\x2d72939bdd5902-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:42:33.533136 kubelet[1407]: I0212 19:42:33.533106 1407 scope.go:115] "RemoveContainer" containerID="2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0" Feb 12 19:42:33.537042 env[1116]: time="2024-02-12T19:42:33.536970147Z" level=info msg="RemoveContainer for \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\"" Feb 12 19:42:33.542726 env[1116]: time="2024-02-12T19:42:33.542671220Z" level=info msg="RemoveContainer for \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\" returns successfully" Feb 12 19:42:33.543405 kubelet[1407]: I0212 19:42:33.543358 1407 scope.go:115] "RemoveContainer" containerID="efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0" Feb 12 19:42:33.545585 env[1116]: time="2024-02-12T19:42:33.545544838Z" level=info msg="RemoveContainer for \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\"" Feb 12 19:42:33.550179 env[1116]: time="2024-02-12T19:42:33.550121925Z" level=info msg="RemoveContainer for \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\" returns successfully" Feb 12 19:42:33.550806 kubelet[1407]: I0212 19:42:33.550776 1407 scope.go:115] "RemoveContainer" containerID="774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6" Feb 12 19:42:33.552954 env[1116]: time="2024-02-12T19:42:33.552867963Z" level=info msg="RemoveContainer for \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\"" Feb 12 19:42:33.557513 env[1116]: time="2024-02-12T19:42:33.557405032Z" level=info msg="RemoveContainer for 
\"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\" returns successfully" Feb 12 19:42:33.557915 kubelet[1407]: I0212 19:42:33.557851 1407 scope.go:115] "RemoveContainer" containerID="c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef" Feb 12 19:42:33.561237 env[1116]: time="2024-02-12T19:42:33.561167824Z" level=info msg="RemoveContainer for \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\"" Feb 12 19:42:33.566351 env[1116]: time="2024-02-12T19:42:33.566301534Z" level=info msg="RemoveContainer for \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\" returns successfully" Feb 12 19:42:33.566790 kubelet[1407]: I0212 19:42:33.566763 1407 scope.go:115] "RemoveContainer" containerID="8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694" Feb 12 19:42:33.568827 env[1116]: time="2024-02-12T19:42:33.568785770Z" level=info msg="RemoveContainer for \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\"" Feb 12 19:42:33.572633 env[1116]: time="2024-02-12T19:42:33.572585297Z" level=info msg="RemoveContainer for \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\" returns successfully" Feb 12 19:42:33.573227 kubelet[1407]: I0212 19:42:33.573203 1407 scope.go:115] "RemoveContainer" containerID="2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0" Feb 12 19:42:33.573729 env[1116]: time="2024-02-12T19:42:33.573647886Z" level=error msg="ContainerStatus for \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\": not found" Feb 12 19:42:33.574062 kubelet[1407]: E0212 19:42:33.574043 1407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\": not found" containerID="2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0" Feb 12 19:42:33.574125 kubelet[1407]: I0212 19:42:33.574098 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0} err="failed to get container status \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"2662f7094ad25699151c6efa1cde3d5bbfa0ca5516a7d9b598d7c8ec1622f9a0\": not found" Feb 12 19:42:33.574125 kubelet[1407]: I0212 19:42:33.574113 1407 scope.go:115] "RemoveContainer" containerID="efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0" Feb 12 19:42:33.574415 env[1116]: time="2024-02-12T19:42:33.574364027Z" level=error msg="ContainerStatus for \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\": not found" Feb 12 19:42:33.574733 kubelet[1407]: E0212 19:42:33.574611 1407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\": not found" containerID="efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0" Feb 12 19:42:33.574733 kubelet[1407]: I0212 19:42:33.574640 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0} err="failed to get container status \"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"efda407d3df61ca7507a4ace5a720453c138c574a26d3d73df2cc187493d6ec0\": not found" Feb 12 19:42:33.574733 kubelet[1407]: I0212 19:42:33.574654 1407 scope.go:115] "RemoveContainer" containerID="774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6" Feb 12 19:42:33.575056 env[1116]: time="2024-02-12T19:42:33.574996520Z" level=error msg="ContainerStatus for \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\": not found" Feb 12 19:42:33.575332 kubelet[1407]: E0212 19:42:33.575312 1407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\": not found" containerID="774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6" Feb 12 19:42:33.575388 kubelet[1407]: I0212 19:42:33.575344 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6} err="failed to get container status \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"774669d9abd5519be3f8fb777ecbc02ca8926231f086394f4af45bacb32778c6\": not found" Feb 12 19:42:33.575388 kubelet[1407]: I0212 19:42:33.575370 1407 scope.go:115] "RemoveContainer" containerID="c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef" Feb 12 19:42:33.575640 env[1116]: time="2024-02-12T19:42:33.575589841Z" level=error msg="ContainerStatus for \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\": 
not found" Feb 12 19:42:33.575966 kubelet[1407]: E0212 19:42:33.575864 1407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\": not found" containerID="c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef" Feb 12 19:42:33.575966 kubelet[1407]: I0212 19:42:33.575895 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef} err="failed to get container status \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6dd482f37d25bf34447561d0c9bbae379b97f268ea9b070e1b2df9d431918ef\": not found" Feb 12 19:42:33.575966 kubelet[1407]: I0212 19:42:33.575906 1407 scope.go:115] "RemoveContainer" containerID="8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694" Feb 12 19:42:33.576487 env[1116]: time="2024-02-12T19:42:33.576427489Z" level=error msg="ContainerStatus for \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\": not found" Feb 12 19:42:33.576752 kubelet[1407]: E0212 19:42:33.576711 1407 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\": not found" containerID="8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694" Feb 12 19:42:33.576752 kubelet[1407]: I0212 19:42:33.576735 1407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd 
ID:8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694} err="failed to get container status \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\": rpc error: code = NotFound desc = an error occurred when try to find container \"8593bb11d3c7fa82754c3248bdd550fe97b8fbdeb315db72334a9d49eb9d8694\": not found" Feb 12 19:42:34.164734 kubelet[1407]: E0212 19:42:34.164666 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:35.165801 kubelet[1407]: E0212 19:42:35.165748 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:35.266515 kubelet[1407]: I0212 19:42:35.265871 1407 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=e16adb87-01a6-4c54-aad5-72939bdd5902 path="/var/lib/kubelet/pods/e16adb87-01a6-4c54-aad5-72939bdd5902/volumes" Feb 12 19:42:35.757193 kubelet[1407]: I0212 19:42:35.757139 1407 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:42:35.757503 kubelet[1407]: E0212 19:42:35.757485 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16adb87-01a6-4c54-aad5-72939bdd5902" containerName="mount-cgroup" Feb 12 19:42:35.757596 kubelet[1407]: E0212 19:42:35.757584 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16adb87-01a6-4c54-aad5-72939bdd5902" containerName="apply-sysctl-overwrites" Feb 12 19:42:35.757706 kubelet[1407]: E0212 19:42:35.757692 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16adb87-01a6-4c54-aad5-72939bdd5902" containerName="clean-cilium-state" Feb 12 19:42:35.757777 kubelet[1407]: E0212 19:42:35.757767 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e16adb87-01a6-4c54-aad5-72939bdd5902" containerName="mount-bpf-fs" Feb 12 19:42:35.757839 kubelet[1407]: E0212 19:42:35.757830 1407 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="e16adb87-01a6-4c54-aad5-72939bdd5902" containerName="cilium-agent" Feb 12 19:42:35.757916 kubelet[1407]: I0212 19:42:35.757906 1407 memory_manager.go:346] "RemoveStaleState removing state" podUID="e16adb87-01a6-4c54-aad5-72939bdd5902" containerName="cilium-agent" Feb 12 19:42:35.763932 systemd[1]: Created slice kubepods-besteffort-pod59d34bb0_345e_498a_9548_7939d0758fb6.slice. Feb 12 19:42:35.800522 kubelet[1407]: I0212 19:42:35.800469 1407 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:42:35.808178 systemd[1]: Created slice kubepods-burstable-pod05a41eff_6a2c_4b51_8a81_3adb5db6fc17.slice. Feb 12 19:42:35.849674 kubelet[1407]: I0212 19:42:35.849619 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdn4s\" (UniqueName: \"kubernetes.io/projected/59d34bb0-345e-498a-9548-7939d0758fb6-kube-api-access-zdn4s\") pod \"cilium-operator-574c4bb98d-hcrpw\" (UID: \"59d34bb0-345e-498a-9548-7939d0758fb6\") " pod="kube-system/cilium-operator-574c4bb98d-hcrpw" Feb 12 19:42:35.849861 kubelet[1407]: I0212 19:42:35.849700 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/59d34bb0-345e-498a-9548-7939d0758fb6-cilium-config-path\") pod \"cilium-operator-574c4bb98d-hcrpw\" (UID: \"59d34bb0-345e-498a-9548-7939d0758fb6\") " pod="kube-system/cilium-operator-574c4bb98d-hcrpw" Feb 12 19:42:35.951117 kubelet[1407]: I0212 19:42:35.951062 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-bpf-maps\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951403 kubelet[1407]: I0212 19:42:35.951384 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cni-path\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951546 kubelet[1407]: I0212 19:42:35.951508 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-lib-modules\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951631 kubelet[1407]: I0212 19:42:35.951562 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-config-path\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951631 kubelet[1407]: I0212 19:42:35.951585 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-host-proc-sys-kernel\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951631 kubelet[1407]: I0212 19:42:35.951606 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-run\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951631 kubelet[1407]: I0212 19:42:35.951625 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-cgroup\") pod 
\"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951802 kubelet[1407]: I0212 19:42:35.951649 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-etc-cni-netd\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951802 kubelet[1407]: I0212 19:42:35.951669 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-clustermesh-secrets\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951802 kubelet[1407]: I0212 19:42:35.951689 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-host-proc-sys-net\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951802 kubelet[1407]: I0212 19:42:35.951716 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrhd4\" (UniqueName: \"kubernetes.io/projected/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-kube-api-access-wrhd4\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951802 kubelet[1407]: I0212 19:42:35.951745 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-xtables-lock\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 
19:42:35.951936 kubelet[1407]: I0212 19:42:35.951767 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-ipsec-secrets\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951936 kubelet[1407]: I0212 19:42:35.951808 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-hostproc\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:35.951936 kubelet[1407]: I0212 19:42:35.951836 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-hubble-tls\") pod \"cilium-86r4p\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " pod="kube-system/cilium-86r4p" Feb 12 19:42:36.071267 kubelet[1407]: E0212 19:42:36.070647 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:36.071790 env[1116]: time="2024-02-12T19:42:36.071682050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-hcrpw,Uid:59d34bb0-345e-498a-9548-7939d0758fb6,Namespace:kube-system,Attempt:0,}" Feb 12 19:42:36.100430 env[1116]: time="2024-02-12T19:42:36.100250336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:42:36.100430 env[1116]: time="2024-02-12T19:42:36.100299934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:42:36.100430 env[1116]: time="2024-02-12T19:42:36.100323821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:42:36.100879 env[1116]: time="2024-02-12T19:42:36.100805587Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cf2b225c1cf3ed63d06d7ef0ab813a3deba1d753436b1929b7fae329e1439c2 pid=3018 runtime=io.containerd.runc.v2 Feb 12 19:42:36.115976 kubelet[1407]: E0212 19:42:36.115900 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:36.117504 env[1116]: time="2024-02-12T19:42:36.117135976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86r4p,Uid:05a41eff-6a2c-4b51-8a81-3adb5db6fc17,Namespace:kube-system,Attempt:0,}" Feb 12 19:42:36.119381 systemd[1]: Started cri-containerd-8cf2b225c1cf3ed63d06d7ef0ab813a3deba1d753436b1929b7fae329e1439c2.scope. Feb 12 19:42:36.167035 kubelet[1407]: E0212 19:42:36.166966 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:36.174376 env[1116]: time="2024-02-12T19:42:36.174278971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:42:36.174673 env[1116]: time="2024-02-12T19:42:36.174627670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:42:36.174853 env[1116]: time="2024-02-12T19:42:36.174818196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:42:36.175249 env[1116]: time="2024-02-12T19:42:36.175188913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f pid=3051 runtime=io.containerd.runc.v2 Feb 12 19:42:36.186351 env[1116]: time="2024-02-12T19:42:36.186304106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-hcrpw,Uid:59d34bb0-345e-498a-9548-7939d0758fb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cf2b225c1cf3ed63d06d7ef0ab813a3deba1d753436b1929b7fae329e1439c2\"" Feb 12 19:42:36.188231 kubelet[1407]: E0212 19:42:36.187590 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:36.189387 env[1116]: time="2024-02-12T19:42:36.189348643Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:42:36.196215 systemd[1]: Started cri-containerd-e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f.scope. 
Feb 12 19:42:36.236041 kubelet[1407]: E0212 19:42:36.235950 1407 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:42:36.249885 env[1116]: time="2024-02-12T19:42:36.249835443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86r4p,Uid:05a41eff-6a2c-4b51-8a81-3adb5db6fc17,Namespace:kube-system,Attempt:0,} returns sandbox id \"e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f\"" Feb 12 19:42:36.252021 kubelet[1407]: E0212 19:42:36.251301 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:36.256451 env[1116]: time="2024-02-12T19:42:36.256244340Z" level=info msg="CreateContainer within sandbox \"e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:42:36.305949 env[1116]: time="2024-02-12T19:42:36.305883546Z" level=info msg="CreateContainer within sandbox \"e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a\"" Feb 12 19:42:36.307352 env[1116]: time="2024-02-12T19:42:36.307296310Z" level=info msg="StartContainer for \"16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a\"" Feb 12 19:42:36.331784 systemd[1]: Started cri-containerd-16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a.scope. Feb 12 19:42:36.351541 systemd[1]: cri-containerd-16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a.scope: Deactivated successfully. 
Feb 12 19:42:36.387214 env[1116]: time="2024-02-12T19:42:36.387136086Z" level=info msg="shim disconnected" id=16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a Feb 12 19:42:36.387214 env[1116]: time="2024-02-12T19:42:36.387211105Z" level=warning msg="cleaning up after shim disconnected" id=16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a namespace=k8s.io Feb 12 19:42:36.387214 env[1116]: time="2024-02-12T19:42:36.387225631Z" level=info msg="cleaning up dead shim" Feb 12 19:42:36.399562 env[1116]: time="2024-02-12T19:42:36.399490849Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:42:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3115 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:42:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 19:42:36.400011 env[1116]: time="2024-02-12T19:42:36.399872038Z" level=error msg="copy shim log" error="read /proc/self/fd/81: file already closed" Feb 12 19:42:36.400565 env[1116]: time="2024-02-12T19:42:36.400517599Z" level=error msg="Failed to pipe stdout of container \"16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a\"" error="reading from a closed fifo" Feb 12 19:42:36.401105 env[1116]: time="2024-02-12T19:42:36.401053995Z" level=error msg="Failed to pipe stderr of container \"16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a\"" error="reading from a closed fifo" Feb 12 19:42:36.408107 env[1116]: time="2024-02-12T19:42:36.407962641Z" level=error msg="StartContainer for \"16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 19:42:36.408669 kubelet[1407]: E0212 19:42:36.408611 1407 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a" Feb 12 19:42:36.408898 kubelet[1407]: E0212 19:42:36.408795 1407 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 19:42:36.408898 kubelet[1407]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 19:42:36.408898 kubelet[1407]: rm /hostbin/cilium-mount Feb 12 19:42:36.409014 kubelet[1407]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wrhd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-86r4p_kube-system(05a41eff-6a2c-4b51-8a81-3adb5db6fc17): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 19:42:36.409014 kubelet[1407]: E0212 19:42:36.408855 1407 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-86r4p" podUID=05a41eff-6a2c-4b51-8a81-3adb5db6fc17 Feb 12 19:42:36.545969 env[1116]: time="2024-02-12T19:42:36.545916706Z" level=info msg="StopPodSandbox for \"e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f\"" Feb 12 19:42:36.546192 env[1116]: time="2024-02-12T19:42:36.545995901Z" level=info msg="Container to stop \"16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:42:36.555513 systemd[1]: cri-containerd-e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f.scope: Deactivated successfully. 
Feb 12 19:42:36.600881 env[1116]: time="2024-02-12T19:42:36.600726979Z" level=info msg="shim disconnected" id=e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f Feb 12 19:42:36.601345 env[1116]: time="2024-02-12T19:42:36.601313730Z" level=warning msg="cleaning up after shim disconnected" id=e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f namespace=k8s.io Feb 12 19:42:36.601474 env[1116]: time="2024-02-12T19:42:36.601456111Z" level=info msg="cleaning up dead shim" Feb 12 19:42:36.614308 env[1116]: time="2024-02-12T19:42:36.614260112Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:42:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3145 runtime=io.containerd.runc.v2\n" Feb 12 19:42:36.614941 env[1116]: time="2024-02-12T19:42:36.614896063Z" level=info msg="TearDown network for sandbox \"e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f\" successfully" Feb 12 19:42:36.615142 env[1116]: time="2024-02-12T19:42:36.615118496Z" level=info msg="StopPodSandbox for \"e70c7cba798c73813860c018a31e08377e9004fd31a2dddd95b1e965775fee1f\" returns successfully" Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757145 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-ipsec-secrets\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757199 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-host-proc-sys-net\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757220 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wrhd4\" (UniqueName: \"kubernetes.io/projected/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-kube-api-access-wrhd4\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757236 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-etc-cni-netd\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757254 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-xtables-lock\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757278 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-hubble-tls\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757299 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-config-path\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757323 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-host-proc-sys-kernel\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 
19:42:36.759419 kubelet[1407]: I0212 19:42:36.757341 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cni-path\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757359 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-run\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757378 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-cgroup\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757401 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-clustermesh-secrets\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757419 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-hostproc\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757436 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-bpf-maps\") pod 
\"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757453 1407 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-lib-modules\") pod \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\" (UID: \"05a41eff-6a2c-4b51-8a81-3adb5db6fc17\") " Feb 12 19:42:36.759419 kubelet[1407]: I0212 19:42:36.757532 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.760174 kubelet[1407]: I0212 19:42:36.757558 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.760174 kubelet[1407]: I0212 19:42:36.757913 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.760174 kubelet[1407]: I0212 19:42:36.757953 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.761851 kubelet[1407]: I0212 19:42:36.761798 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:42:36.762749 kubelet[1407]: I0212 19:42:36.762713 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:42:36.763129 kubelet[1407]: W0212 19:42:36.763090 1407 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/05a41eff-6a2c-4b51-8a81-3adb5db6fc17/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:42:36.765310 kubelet[1407]: I0212 19:42:36.765260 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-kube-api-access-wrhd4" (OuterVolumeSpecName: "kube-api-access-wrhd4") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). 
InnerVolumeSpecName "kube-api-access-wrhd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:42:36.765439 kubelet[1407]: I0212 19:42:36.765335 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.765439 kubelet[1407]: I0212 19:42:36.765355 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.765439 kubelet[1407]: I0212 19:42:36.765370 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cni-path" (OuterVolumeSpecName: "cni-path") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.765439 kubelet[1407]: I0212 19:42:36.765384 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.765439 kubelet[1407]: I0212 19:42:36.765407 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-hostproc" (OuterVolumeSpecName: "hostproc") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.765692 kubelet[1407]: I0212 19:42:36.765290 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:42:36.765824 kubelet[1407]: I0212 19:42:36.765803 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:42:36.769242 kubelet[1407]: I0212 19:42:36.769178 1407 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "05a41eff-6a2c-4b51-8a81-3adb5db6fc17" (UID: "05a41eff-6a2c-4b51-8a81-3adb5db6fc17"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:42:36.857870 kubelet[1407]: I0212 19:42:36.857732 1407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-host-proc-sys-net\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.858145 kubelet[1407]: I0212 19:42:36.858122 1407 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wrhd4\" (UniqueName: \"kubernetes.io/projected/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-kube-api-access-wrhd4\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.858256 kubelet[1407]: I0212 19:42:36.858244 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-ipsec-secrets\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.858407 kubelet[1407]: I0212 19:42:36.858394 1407 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-hubble-tls\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.858540 kubelet[1407]: I0212 19:42:36.858525 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-config-path\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.858669 kubelet[1407]: I0212 19:42:36.858656 1407 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-host-proc-sys-kernel\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.858767 kubelet[1407]: I0212 19:42:36.858755 1407 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-etc-cni-netd\") on node \"146.190.38.70\" 
DevicePath \"\"" Feb 12 19:42:36.858910 kubelet[1407]: I0212 19:42:36.858896 1407 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-xtables-lock\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.859033 kubelet[1407]: I0212 19:42:36.859018 1407 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cni-path\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.859244 kubelet[1407]: I0212 19:42:36.859228 1407 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-run\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.859349 kubelet[1407]: I0212 19:42:36.859337 1407 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-clustermesh-secrets\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.859442 kubelet[1407]: I0212 19:42:36.859431 1407 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-hostproc\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.859538 kubelet[1407]: I0212 19:42:36.859524 1407 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-bpf-maps\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.859638 kubelet[1407]: I0212 19:42:36.859626 1407 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-lib-modules\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.859729 kubelet[1407]: I0212 19:42:36.859718 1407 reconciler_common.go:300] "Volume 
detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05a41eff-6a2c-4b51-8a81-3adb5db6fc17-cilium-cgroup\") on node \"146.190.38.70\" DevicePath \"\"" Feb 12 19:42:36.973596 systemd[1]: var-lib-kubelet-pods-05a41eff\x2d6a2c\x2d4b51\x2d8a81\x2d3adb5db6fc17-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:42:36.973738 systemd[1]: var-lib-kubelet-pods-05a41eff\x2d6a2c\x2d4b51\x2d8a81\x2d3adb5db6fc17-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:42:36.973830 systemd[1]: var-lib-kubelet-pods-05a41eff\x2d6a2c\x2d4b51\x2d8a81\x2d3adb5db6fc17-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:42:37.169116 kubelet[1407]: E0212 19:42:37.168645 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:42:37.269795 systemd[1]: Removed slice kubepods-burstable-pod05a41eff_6a2c_4b51_8a81_3adb5db6fc17.slice. Feb 12 19:42:37.502433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1411880153.mount: Deactivated successfully. 
Feb 12 19:42:37.553994 kubelet[1407]: I0212 19:42:37.553085 1407 scope.go:115] "RemoveContainer" containerID="16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a" Feb 12 19:42:37.557386 env[1116]: time="2024-02-12T19:42:37.557329485Z" level=info msg="RemoveContainer for \"16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a\"" Feb 12 19:42:37.560886 env[1116]: time="2024-02-12T19:42:37.560799975Z" level=info msg="RemoveContainer for \"16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a\" returns successfully" Feb 12 19:42:37.614504 kubelet[1407]: I0212 19:42:37.614444 1407 topology_manager.go:212] "Topology Admit Handler" Feb 12 19:42:37.614832 kubelet[1407]: E0212 19:42:37.614531 1407 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="05a41eff-6a2c-4b51-8a81-3adb5db6fc17" containerName="mount-cgroup" Feb 12 19:42:37.614832 kubelet[1407]: I0212 19:42:37.614566 1407 memory_manager.go:346] "RemoveStaleState removing state" podUID="05a41eff-6a2c-4b51-8a81-3adb5db6fc17" containerName="mount-cgroup" Feb 12 19:42:37.626457 systemd[1]: Created slice kubepods-burstable-pod7d47683b_db35_4593_94a4_fe868489cffc.slice. 
Feb 12 19:42:37.769329 kubelet[1407]: I0212 19:42:37.769172 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-cni-path\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.769329 kubelet[1407]: I0212 19:42:37.769249 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-host-proc-sys-net\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.769329 kubelet[1407]: I0212 19:42:37.769280 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d47683b-db35-4593-94a4-fe868489cffc-hubble-tls\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770426 kubelet[1407]: I0212 19:42:37.770387 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-cilium-run\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770550 kubelet[1407]: I0212 19:42:37.770449 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-hostproc\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770550 kubelet[1407]: I0212 19:42:37.770472 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-cilium-cgroup\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770550 kubelet[1407]: I0212 19:42:37.770503 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d47683b-db35-4593-94a4-fe868489cffc-cilium-config-path\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770550 kubelet[1407]: I0212 19:42:37.770524 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-etc-cni-netd\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770550 kubelet[1407]: I0212 19:42:37.770548 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d47683b-db35-4593-94a4-fe868489cffc-clustermesh-secrets\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770701 kubelet[1407]: I0212 19:42:37.770567 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-host-proc-sys-kernel\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770701 kubelet[1407]: I0212 19:42:37.770596 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-xtables-lock\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770701 kubelet[1407]: I0212 19:42:37.770626 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt4jc\" (UniqueName: \"kubernetes.io/projected/7d47683b-db35-4593-94a4-fe868489cffc-kube-api-access-zt4jc\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770701 kubelet[1407]: I0212 19:42:37.770650 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-bpf-maps\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770701 kubelet[1407]: I0212 19:42:37.770675 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d47683b-db35-4593-94a4-fe868489cffc-lib-modules\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.770956 kubelet[1407]: I0212 19:42:37.770701 1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7d47683b-db35-4593-94a4-fe868489cffc-cilium-ipsec-secrets\") pod \"cilium-bc26f\" (UID: \"7d47683b-db35-4593-94a4-fe868489cffc\") " pod="kube-system/cilium-bc26f" Feb 12 19:42:37.935946 kubelet[1407]: E0212 19:42:37.935399 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:37.936322 env[1116]: 
time="2024-02-12T19:42:37.936253142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bc26f,Uid:7d47683b-db35-4593-94a4-fe868489cffc,Namespace:kube-system,Attempt:0,}" Feb 12 19:42:37.997344 env[1116]: time="2024-02-12T19:42:37.997056047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:42:37.997344 env[1116]: time="2024-02-12T19:42:37.997134709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:42:37.997344 env[1116]: time="2024-02-12T19:42:37.997147203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:42:37.997657 env[1116]: time="2024-02-12T19:42:37.997412849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488 pid=3172 runtime=io.containerd.runc.v2 Feb 12 19:42:38.030491 systemd[1]: Started cri-containerd-85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488.scope. 
Feb 12 19:42:38.085034 env[1116]: time="2024-02-12T19:42:38.084946900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bc26f,Uid:7d47683b-db35-4593-94a4-fe868489cffc,Namespace:kube-system,Attempt:0,} returns sandbox id \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\"" Feb 12 19:42:38.086847 kubelet[1407]: E0212 19:42:38.086197 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:42:38.089314 env[1116]: time="2024-02-12T19:42:38.089259272Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:42:38.106765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066204784.mount: Deactivated successfully. Feb 12 19:42:38.127613 env[1116]: time="2024-02-12T19:42:38.127500627Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036\"" Feb 12 19:42:38.128701 env[1116]: time="2024-02-12T19:42:38.128642065Z" level=info msg="StartContainer for \"b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036\"" Feb 12 19:42:38.155553 systemd[1]: Started cri-containerd-b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036.scope. 
Feb 12 19:42:38.169592 kubelet[1407]: E0212 19:42:38.169242 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:38.219827 env[1116]: time="2024-02-12T19:42:38.219739617Z" level=info msg="StartContainer for \"b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036\" returns successfully"
Feb 12 19:42:38.231504 systemd[1]: cri-containerd-b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036.scope: Deactivated successfully.
Feb 12 19:42:38.313469 env[1116]: time="2024-02-12T19:42:38.313408142Z" level=info msg="shim disconnected" id=b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036
Feb 12 19:42:38.313469 env[1116]: time="2024-02-12T19:42:38.313462783Z" level=warning msg="cleaning up after shim disconnected" id=b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036 namespace=k8s.io
Feb 12 19:42:38.313469 env[1116]: time="2024-02-12T19:42:38.313477529Z" level=info msg="cleaning up dead shim"
Feb 12 19:42:38.340112 env[1116]: time="2024-02-12T19:42:38.340052520Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:42:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3259 runtime=io.containerd.runc.v2\n"
Feb 12 19:42:38.557694 kubelet[1407]: E0212 19:42:38.557647 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:38.561054 env[1116]: time="2024-02-12T19:42:38.560973281Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:42:38.599335 env[1116]: time="2024-02-12T19:42:38.599160717Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a\""
Feb 12 19:42:38.600321 env[1116]: time="2024-02-12T19:42:38.600271865Z" level=info msg="StartContainer for \"31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a\""
Feb 12 19:42:38.630530 systemd[1]: Started cri-containerd-31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a.scope.
Feb 12 19:42:38.650468 env[1116]: time="2024-02-12T19:42:38.650383201Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:42:38.658648 env[1116]: time="2024-02-12T19:42:38.658568330Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:42:38.662830 env[1116]: time="2024-02-12T19:42:38.662747442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:42:38.663234 env[1116]: time="2024-02-12T19:42:38.663167460Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 19:42:38.667610 env[1116]: time="2024-02-12T19:42:38.667371764Z" level=info msg="CreateContainer within sandbox \"8cf2b225c1cf3ed63d06d7ef0ab813a3deba1d753436b1929b7fae329e1439c2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 19:42:38.691677 env[1116]: time="2024-02-12T19:42:38.691593834Z" level=info msg="CreateContainer within sandbox \"8cf2b225c1cf3ed63d06d7ef0ab813a3deba1d753436b1929b7fae329e1439c2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"659a2c34630ba3f97ef1815321819866f2930877bdaa9558cb1d7a7c6ac32f53\""
Feb 12 19:42:38.694399 env[1116]: time="2024-02-12T19:42:38.694256394Z" level=info msg="StartContainer for \"659a2c34630ba3f97ef1815321819866f2930877bdaa9558cb1d7a7c6ac32f53\""
Feb 12 19:42:38.701611 env[1116]: time="2024-02-12T19:42:38.701493970Z" level=info msg="StartContainer for \"31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a\" returns successfully"
Feb 12 19:42:38.711775 systemd[1]: cri-containerd-31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a.scope: Deactivated successfully.
Feb 12 19:42:38.726660 systemd[1]: Started cri-containerd-659a2c34630ba3f97ef1815321819866f2930877bdaa9558cb1d7a7c6ac32f53.scope.
Feb 12 19:42:38.812647 env[1116]: time="2024-02-12T19:42:38.812579929Z" level=info msg="StartContainer for \"659a2c34630ba3f97ef1815321819866f2930877bdaa9558cb1d7a7c6ac32f53\" returns successfully"
Feb 12 19:42:38.814325 env[1116]: time="2024-02-12T19:42:38.814249709Z" level=info msg="shim disconnected" id=31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a
Feb 12 19:42:38.814627 env[1116]: time="2024-02-12T19:42:38.814596190Z" level=warning msg="cleaning up after shim disconnected" id=31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a namespace=k8s.io
Feb 12 19:42:38.814796 env[1116]: time="2024-02-12T19:42:38.814772294Z" level=info msg="cleaning up dead shim"
Feb 12 19:42:38.828856 env[1116]: time="2024-02-12T19:42:38.828791019Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:42:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3357 runtime=io.containerd.runc.v2\n"
Feb 12 19:42:39.169743 kubelet[1407]: E0212 19:42:39.169664 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:39.265138 kubelet[1407]: I0212 19:42:39.265051 1407 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=05a41eff-6a2c-4b51-8a81-3adb5db6fc17 path="/var/lib/kubelet/pods/05a41eff-6a2c-4b51-8a81-3adb5db6fc17/volumes"
Feb 12 19:42:39.496540 kubelet[1407]: W0212 19:42:39.496282 1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05a41eff_6a2c_4b51_8a81_3adb5db6fc17.slice/cri-containerd-16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a.scope WatchSource:0}: container "16755bac464b37eae2d8c99e86f393635624fcaea7289da64964454c57cf2a1a" in namespace "k8s.io": not found
Feb 12 19:42:39.561749 kubelet[1407]: E0212 19:42:39.561713 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:39.565652 kubelet[1407]: E0212 19:42:39.565588 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:39.569251 env[1116]: time="2024-02-12T19:42:39.569193437Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:42:39.577333 kubelet[1407]: I0212 19:42:39.577265 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-hcrpw" podStartSLOduration=2.102332299 podCreationTimestamp="2024-02-12 19:42:35 +0000 UTC" firstStartedPulling="2024-02-12 19:42:36.188950963 +0000 UTC m=+75.917916842" lastFinishedPulling="2024-02-12 19:42:38.663818738 +0000 UTC m=+78.392784640" observedRunningTime="2024-02-12 19:42:39.577061472 +0000 UTC m=+79.306027373" watchObservedRunningTime="2024-02-12 19:42:39.577200097 +0000 UTC m=+79.306165997"
Feb 12 19:42:39.597482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688169172.mount: Deactivated successfully.
Feb 12 19:42:39.605583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138512981.mount: Deactivated successfully.
Feb 12 19:42:39.616747 env[1116]: time="2024-02-12T19:42:39.616672260Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22\""
Feb 12 19:42:39.618293 env[1116]: time="2024-02-12T19:42:39.618240538Z" level=info msg="StartContainer for \"77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22\""
Feb 12 19:42:39.645259 systemd[1]: Started cri-containerd-77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22.scope.
Feb 12 19:42:39.696852 env[1116]: time="2024-02-12T19:42:39.696786456Z" level=info msg="StartContainer for \"77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22\" returns successfully"
Feb 12 19:42:39.701668 systemd[1]: cri-containerd-77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22.scope: Deactivated successfully.
Feb 12 19:42:39.743111 env[1116]: time="2024-02-12T19:42:39.742958260Z" level=info msg="shim disconnected" id=77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22
Feb 12 19:42:39.743500 env[1116]: time="2024-02-12T19:42:39.743475021Z" level=warning msg="cleaning up after shim disconnected" id=77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22 namespace=k8s.io
Feb 12 19:42:39.743622 env[1116]: time="2024-02-12T19:42:39.743606331Z" level=info msg="cleaning up dead shim"
Feb 12 19:42:39.756223 env[1116]: time="2024-02-12T19:42:39.756063129Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:42:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3417 runtime=io.containerd.runc.v2\n"
Feb 12 19:42:40.170295 kubelet[1407]: E0212 19:42:40.170229 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:40.570513 kubelet[1407]: E0212 19:42:40.570477 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:40.572570 kubelet[1407]: E0212 19:42:40.572539 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:40.574000 env[1116]: time="2024-02-12T19:42:40.573917872Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:42:40.593510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095394591.mount: Deactivated successfully.
Feb 12 19:42:40.605492 env[1116]: time="2024-02-12T19:42:40.605370957Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017\""
Feb 12 19:42:40.606498 env[1116]: time="2024-02-12T19:42:40.606449032Z" level=info msg="StartContainer for \"ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017\""
Feb 12 19:42:40.629106 systemd[1]: Started cri-containerd-ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017.scope.
Feb 12 19:42:40.668140 systemd[1]: cri-containerd-ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017.scope: Deactivated successfully.
Feb 12 19:42:40.671278 env[1116]: time="2024-02-12T19:42:40.670740828Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d47683b_db35_4593_94a4_fe868489cffc.slice/cri-containerd-ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017.scope/memory.events\": no such file or directory"
Feb 12 19:42:40.675335 env[1116]: time="2024-02-12T19:42:40.675237528Z" level=info msg="StartContainer for \"ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017\" returns successfully"
Feb 12 19:42:40.714299 env[1116]: time="2024-02-12T19:42:40.714243258Z" level=info msg="shim disconnected" id=ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017
Feb 12 19:42:40.714647 env[1116]: time="2024-02-12T19:42:40.714614697Z" level=warning msg="cleaning up after shim disconnected" id=ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017 namespace=k8s.io
Feb 12 19:42:40.714750 env[1116]: time="2024-02-12T19:42:40.714733393Z" level=info msg="cleaning up dead shim"
Feb 12 19:42:40.727495 env[1116]: time="2024-02-12T19:42:40.727428968Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:42:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3472 runtime=io.containerd.runc.v2\n"
Feb 12 19:42:41.088846 kubelet[1407]: E0212 19:42:41.088793 1407 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:41.171076 kubelet[1407]: E0212 19:42:41.171007 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:41.237026 kubelet[1407]: E0212 19:42:41.236937 1407 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:42:41.576124 kubelet[1407]: E0212 19:42:41.575742 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:41.579742 env[1116]: time="2024-02-12T19:42:41.579681913Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:42:41.610880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740148412.mount: Deactivated successfully.
Feb 12 19:42:41.618230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932704131.mount: Deactivated successfully.
Feb 12 19:42:41.631197 env[1116]: time="2024-02-12T19:42:41.631130990Z" level=info msg="CreateContainer within sandbox \"85cf9b32776c3251ebb29d67bcd2d7db9043e8744ad8302c1084bbd2f7f11488\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f49cf17389ac105ff676c3af8961768f954bd33272015830be65855709d6bd3\""
Feb 12 19:42:41.633002 env[1116]: time="2024-02-12T19:42:41.632922124Z" level=info msg="StartContainer for \"2f49cf17389ac105ff676c3af8961768f954bd33272015830be65855709d6bd3\""
Feb 12 19:42:41.653555 systemd[1]: Started cri-containerd-2f49cf17389ac105ff676c3af8961768f954bd33272015830be65855709d6bd3.scope.
Feb 12 19:42:41.708286 env[1116]: time="2024-02-12T19:42:41.708162831Z" level=info msg="StartContainer for \"2f49cf17389ac105ff676c3af8961768f954bd33272015830be65855709d6bd3\" returns successfully"
Feb 12 19:42:42.172031 kubelet[1407]: E0212 19:42:42.171953 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:42.178119 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 19:42:42.583440 kubelet[1407]: E0212 19:42:42.583400 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:42.612013 kubelet[1407]: W0212 19:42:42.611909 1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d47683b_db35_4593_94a4_fe868489cffc.slice/cri-containerd-b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036.scope WatchSource:0}: task b68088f2455c8b512e6486f200e2e114cae27b169367bef348f3d798db55d036 not found: not found
Feb 12 19:42:42.613242 kubelet[1407]: I0212 19:42:42.613205 1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bc26f" podStartSLOduration=5.613170531 podCreationTimestamp="2024-02-12 19:42:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:42:42.611759505 +0000 UTC m=+82.340725405" watchObservedRunningTime="2024-02-12 19:42:42.613170531 +0000 UTC m=+82.342136431"
Feb 12 19:42:43.172916 kubelet[1407]: E0212 19:42:43.172862 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:43.938097 kubelet[1407]: E0212 19:42:43.938056 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:44.174024 kubelet[1407]: E0212 19:42:44.173945 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:44.574741 systemd[1]: run-containerd-runc-k8s.io-2f49cf17389ac105ff676c3af8961768f954bd33272015830be65855709d6bd3-runc.cWSak6.mount: Deactivated successfully.
Feb 12 19:42:44.603466 kubelet[1407]: I0212 19:42:44.603428 1407 setters.go:548] "Node became not ready" node="146.190.38.70" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:42:44.603353526 +0000 UTC m=+84.332319420 LastTransitionTime:2024-02-12 19:42:44.603353526 +0000 UTC m=+84.332319420 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 19:42:45.174932 kubelet[1407]: E0212 19:42:45.174871 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:45.724211 kubelet[1407]: W0212 19:42:45.724142 1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d47683b_db35_4593_94a4_fe868489cffc.slice/cri-containerd-31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a.scope WatchSource:0}: task 31bffe7eb7574881c8e91c802bc02cec7616ca4202d477f1d04609b2fdc2338a not found: not found
Feb 12 19:42:45.770503 systemd-networkd[1005]: lxc_health: Link UP
Feb 12 19:42:45.782286 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:42:45.781831 systemd-networkd[1005]: lxc_health: Gained carrier
Feb 12 19:42:45.938689 kubelet[1407]: E0212 19:42:45.938645 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:46.176852 kubelet[1407]: E0212 19:42:46.176799 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:46.597648 kubelet[1407]: E0212 19:42:46.597599 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:46.843249 systemd[1]: run-containerd-runc-k8s.io-2f49cf17389ac105ff676c3af8961768f954bd33272015830be65855709d6bd3-runc.hgpoYf.mount: Deactivated successfully.
Feb 12 19:42:47.177151 kubelet[1407]: E0212 19:42:47.177057 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:47.599362 kubelet[1407]: E0212 19:42:47.599329 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:47.735249 systemd-networkd[1005]: lxc_health: Gained IPv6LL
Feb 12 19:42:48.177360 kubelet[1407]: E0212 19:42:48.177301 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:48.881936 kubelet[1407]: W0212 19:42:48.881881 1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d47683b_db35_4593_94a4_fe868489cffc.slice/cri-containerd-77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22.scope WatchSource:0}: task 77fbcfd04a9109e6864fd2ef1433409b89be1b22a09d9f008ddae07402f78b22 not found: not found
Feb 12 19:42:49.179233 kubelet[1407]: E0212 19:42:49.179065 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:50.180012 kubelet[1407]: E0212 19:42:50.179894 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:51.180847 kubelet[1407]: E0212 19:42:51.180793 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:51.376645 systemd[1]: run-containerd-runc-k8s.io-2f49cf17389ac105ff676c3af8961768f954bd33272015830be65855709d6bd3-runc.HogyCj.mount: Deactivated successfully.
Feb 12 19:42:51.993820 kubelet[1407]: W0212 19:42:51.993745 1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d47683b_db35_4593_94a4_fe868489cffc.slice/cri-containerd-ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017.scope WatchSource:0}: task ba5ba12235e952e4864ceefe99f3b9bca5046b3b73652cc2abdda68b00a2f017 not found: not found
Feb 12 19:42:52.182592 kubelet[1407]: E0212 19:42:52.182526 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:53.183570 kubelet[1407]: E0212 19:42:53.183495 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:54.184887 kubelet[1407]: E0212 19:42:54.184613 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:42:54.262484 kubelet[1407]: E0212 19:42:54.262409 1407 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:42:55.184967 kubelet[1407]: E0212 19:42:55.184847 1407 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"