Feb 12 19:46:34.968735 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 12 19:46:34.968766 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 12 19:46:34.968782 kernel: BIOS-provided physical RAM map: Feb 12 19:46:34.968792 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 12 19:46:34.968800 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 12 19:46:34.968809 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 12 19:46:34.968820 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Feb 12 19:46:34.968830 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Feb 12 19:46:34.968842 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 12 19:46:34.968851 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 12 19:46:34.968861 kernel: NX (Execute Disable) protection: active Feb 12 19:46:34.968870 kernel: SMBIOS 2.8 present. Feb 12 19:46:34.968879 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Feb 12 19:46:34.968888 kernel: Hypervisor detected: KVM Feb 12 19:46:34.968900 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 19:46:34.968914 kernel: kvm-clock: cpu 0, msr 2efaa001, primary cpu clock Feb 12 19:46:34.968945 kernel: kvm-clock: using sched offset of 5813372578 cycles Feb 12 19:46:34.968957 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 19:46:34.968967 kernel: tsc: Detected 2494.140 MHz processor Feb 12 19:46:34.968978 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 19:46:34.968989 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 19:46:34.968999 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Feb 12 19:46:34.969010 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 19:46:34.969054 kernel: ACPI: Early table checksum verification disabled Feb 12 19:46:34.969065 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Feb 12 19:46:34.969076 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:46:34.969086 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:46:34.969097 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:46:34.969107 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 12 19:46:34.969117 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:46:34.969128 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:46:34.969138 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:46:34.969152 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 12 19:46:34.969163 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Feb 12 19:46:34.969173 kernel: ACPI: Reserving DSDT table memory 
at [mem 0x7ffe0040-0x7ffe1769] Feb 12 19:46:34.969185 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 12 19:46:34.969196 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Feb 12 19:46:34.969206 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Feb 12 19:46:34.969217 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Feb 12 19:46:34.969227 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Feb 12 19:46:34.969246 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 12 19:46:34.969258 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 12 19:46:34.969269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 12 19:46:34.969281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 12 19:46:34.969292 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Feb 12 19:46:34.969304 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Feb 12 19:46:34.969318 kernel: Zone ranges: Feb 12 19:46:34.969330 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 19:46:34.969341 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Feb 12 19:46:34.969352 kernel: Normal empty Feb 12 19:46:34.969363 kernel: Movable zone start for each node Feb 12 19:46:34.969374 kernel: Early memory node ranges Feb 12 19:46:34.969385 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 12 19:46:34.969397 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Feb 12 19:46:34.969408 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Feb 12 19:46:34.969422 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 19:46:34.969434 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 12 19:46:34.969446 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Feb 12 19:46:34.969457 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 12 19:46:34.969468 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 19:46:34.969479 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 12 19:46:34.969491 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 12 19:46:34.969502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 19:46:34.969513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 19:46:34.969528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 19:46:34.969539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 19:46:34.969550 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 19:46:34.969561 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 12 19:46:34.969573 kernel: TSC deadline timer available Feb 12 19:46:34.969584 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 12 19:46:34.969595 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 12 19:46:34.969606 kernel: Booting paravirtualized kernel on KVM Feb 12 19:46:34.969618 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 19:46:34.969632 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 12 19:46:34.969643 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 12 19:46:34.969654 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 12 19:46:34.969665 kernel: pcpu-alloc: [0] 0 1 Feb 12 19:46:34.969676 kernel: 
kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 12 19:46:34.969687 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 12 19:46:34.969698 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Feb 12 19:46:34.969709 kernel: Policy zone: DMA32 Feb 12 19:46:34.969722 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 12 19:46:34.969737 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 19:46:34.969748 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 19:46:34.969760 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 12 19:46:34.969771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 19:46:34.969783 kernel: Memory: 1975320K/2096600K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 12 19:46:34.969794 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 12 19:46:34.969805 kernel: Kernel/User page tables isolation: enabled Feb 12 19:46:34.969816 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 19:46:34.969831 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 19:46:34.969842 kernel: rcu: Hierarchical RCU implementation. Feb 12 19:46:34.969855 kernel: rcu: RCU event tracing is enabled. Feb 12 19:46:34.969866 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 12 19:46:34.969877 kernel: Rude variant of Tasks RCU enabled. Feb 12 19:46:34.969889 kernel: Tracing variant of Tasks RCU enabled. Feb 12 19:46:34.969900 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 19:46:34.969911 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 12 19:46:34.969922 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 12 19:46:34.969936 kernel: random: crng init done Feb 12 19:46:34.969947 kernel: Console: colour VGA+ 80x25 Feb 12 19:46:34.969959 kernel: printk: console [tty0] enabled Feb 12 19:46:34.969970 kernel: printk: console [ttyS0] enabled Feb 12 19:46:34.969981 kernel: ACPI: Core revision 20210730 Feb 12 19:46:34.969993 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 12 19:46:34.970005 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 19:46:34.970016 kernel: x2apic enabled Feb 12 19:46:34.970049 kernel: Switched APIC routing to physical x2apic. Feb 12 19:46:34.970064 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 12 19:46:34.970076 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Feb 12 19:46:34.970087 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Feb 12 19:46:34.970099 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 12 19:46:34.970110 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 12 19:46:34.970121 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 19:46:34.970133 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 19:46:34.970144 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 19:46:34.970156 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 19:46:34.970170 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 12 19:46:34.970192 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 12 19:46:34.970204 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 12 19:46:34.970219 kernel: MDS: Mitigation: Clear CPU buffers Feb 12 19:46:34.970231 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 12 19:46:34.970243 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 19:46:34.970255 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 19:46:34.970267 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 19:46:34.970279 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 19:46:34.970292 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 12 19:46:34.970307 kernel: Freeing SMP alternatives memory: 32K Feb 12 19:46:34.970319 kernel: pid_max: default: 32768 minimum: 301 Feb 12 19:46:34.970331 kernel: LSM: Security Framework initializing Feb 12 19:46:34.970366 kernel: SELinux: Initializing. Feb 12 19:46:34.970378 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 19:46:34.970391 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 19:46:34.970407 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x3f, stepping: 0x2) Feb 12 19:46:34.970420 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 12 19:46:34.970432 kernel: signal: max sigframe size: 1776 Feb 12 19:46:34.970443 kernel: rcu: Hierarchical SRCU implementation. Feb 12 19:46:34.970455 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 12 19:46:34.970467 kernel: smp: Bringing up secondary CPUs ... Feb 12 19:46:34.970493 kernel: x86: Booting SMP configuration: Feb 12 19:46:34.970504 kernel: .... 
node #0, CPUs: #1 Feb 12 19:46:34.970516 kernel: kvm-clock: cpu 1, msr 2efaa041, secondary cpu clock Feb 12 19:46:34.970528 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 12 19:46:34.970544 kernel: smp: Brought up 1 node, 2 CPUs Feb 12 19:46:34.970556 kernel: smpboot: Max logical packages: 1 Feb 12 19:46:34.970568 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Feb 12 19:46:34.970580 kernel: devtmpfs: initialized Feb 12 19:46:34.970592 kernel: x86/mm: Memory block size: 128MB Feb 12 19:46:34.970604 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 19:46:34.970617 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 12 19:46:34.970629 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 19:46:34.970640 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 19:46:34.970656 kernel: audit: initializing netlink subsys (disabled) Feb 12 19:46:34.970668 kernel: audit: type=2000 audit(1707767193.549:1): state=initialized audit_enabled=0 res=1 Feb 12 19:46:34.970680 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 19:46:34.970691 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 19:46:34.970703 kernel: cpuidle: using governor menu Feb 12 19:46:34.970716 kernel: ACPI: bus type PCI registered Feb 12 19:46:34.970727 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 19:46:34.970740 kernel: dca service started, version 1.12.1 Feb 12 19:46:34.970752 kernel: PCI: Using configuration type 1 for base access Feb 12 19:46:34.970767 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 12 19:46:34.970780 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 19:46:34.970792 kernel: ACPI: Added _OSI(Module Device) Feb 12 19:46:34.970804 kernel: ACPI: Added _OSI(Processor Device) Feb 12 19:46:34.970815 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 19:46:34.970843 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 19:46:34.970855 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 19:46:34.970868 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 19:46:34.970881 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 19:46:34.970899 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 12 19:46:34.970912 kernel: ACPI: Interpreter enabled Feb 12 19:46:34.970925 kernel: ACPI: PM: (supports S0 S5) Feb 12 19:46:34.970936 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 19:46:34.970950 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 19:46:34.970963 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 12 19:46:34.971007 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 19:46:34.979469 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 12 19:46:34.979624 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Feb 12 19:46:34.979639 kernel: acpiphp: Slot [3] registered Feb 12 19:46:34.979650 kernel: acpiphp: Slot [4] registered Feb 12 19:46:34.979664 kernel: acpiphp: Slot [5] registered Feb 12 19:46:34.979674 kernel: acpiphp: Slot [6] registered Feb 12 19:46:34.979683 kernel: acpiphp: Slot [7] registered Feb 12 19:46:34.979692 kernel: acpiphp: Slot [8] registered Feb 12 19:46:34.979705 kernel: acpiphp: Slot [9] registered Feb 12 19:46:34.979723 kernel: acpiphp: Slot [10] registered Feb 12 19:46:34.979736 kernel: acpiphp: Slot [11] registered Feb 12 19:46:34.979751 kernel: acpiphp: Slot [12] registered Feb 12 19:46:34.979765 kernel: acpiphp: Slot [13] registered Feb 12 19:46:34.979780 kernel: acpiphp: Slot [14] registered Feb 12 19:46:34.979794 kernel: acpiphp: Slot [15] registered Feb 12 19:46:34.979804 kernel: acpiphp: Slot [16] registered Feb 12 19:46:34.979813 kernel: acpiphp: Slot [17] registered Feb 12 19:46:34.979824 kernel: acpiphp: Slot [18] registered Feb 12 19:46:34.979833 kernel: acpiphp: Slot [19] registered Feb 12 19:46:34.979846 kernel: acpiphp: Slot [20] registered Feb 12 19:46:34.979855 kernel: acpiphp: Slot [21] registered Feb 12 19:46:34.979864 kernel: acpiphp: Slot [22] registered Feb 12 19:46:34.979876 kernel: acpiphp: Slot [23] registered Feb 12 19:46:34.979892 kernel: acpiphp: Slot [24] registered Feb 12 19:46:34.979918 kernel: acpiphp: Slot [25] registered Feb 12 19:46:34.979932 kernel: acpiphp: Slot [26] registered Feb 12 19:46:34.979947 kernel: acpiphp: Slot [27] registered Feb 12 19:46:34.979967 kernel: acpiphp: Slot [28] registered Feb 12 19:46:34.979986 kernel: acpiphp: Slot [29] registered Feb 12 19:46:34.980000 kernel: acpiphp: Slot [30] registered Feb 12 19:46:34.980014 kernel: acpiphp: Slot [31] registered Feb 12 19:46:34.980028 kernel: PCI host bridge to bus 0000:00 Feb 12 19:46:34.980278 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 19:46:34.980424 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 19:46:34.980572 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 12 19:46:34.980731 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 12 19:46:34.980877 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 12 19:46:34.981072 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 19:46:34.981338 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 19:46:34.981526 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 19:46:34.981687 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 12 19:46:34.981853 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Feb 12 19:46:34.982021 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 12 19:46:34.982206 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 12 19:46:34.982337 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 12 19:46:34.982458 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 12 19:46:34.982585 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Feb 12 19:46:34.982706 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Feb 12 19:46:34.982813 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 12 19:46:34.982987 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 12 19:46:34.983265 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 12 19:46:34.983428 
kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 12 19:46:34.983584 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 12 19:46:34.983746 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 12 19:46:34.983893 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Feb 12 19:46:34.984005 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 12 19:46:34.984139 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 19:46:34.984266 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 12 19:46:34.984393 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Feb 12 19:46:34.984543 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Feb 12 19:46:34.984666 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 12 19:46:34.984835 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 12 19:46:34.985079 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Feb 12 19:46:34.985252 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Feb 12 19:46:34.985356 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 12 19:46:34.985518 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Feb 12 19:46:34.985680 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Feb 12 19:46:34.985843 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Feb 12 19:46:34.986004 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 12 19:46:34.986204 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Feb 12 19:46:34.986336 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Feb 12 19:46:34.986491 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Feb 12 19:46:34.986637 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 12 19:46:34.986807 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Feb 12 19:46:34.986981 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Feb 12 19:46:34.994285 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Feb 12 19:46:34.994495 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Feb 12 19:46:34.994636 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Feb 12 19:46:34.994774 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Feb 12 19:46:34.994917 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Feb 12 19:46:34.994937 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 19:46:34.994952 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 19:46:34.994983 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 19:46:34.995017 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 19:46:34.995048 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 19:46:34.995063 kernel: iommu: Default domain type: Translated Feb 12 19:46:34.995076 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 19:46:34.995258 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 12 19:46:34.995373 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 19:46:34.995508 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Feb 12 19:46:34.995522 kernel: vgaarb: loaded Feb 12 19:46:34.995542 kernel: pps_core: LinuxPPS API ver. 
1 registered Feb 12 19:46:34.995552 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 19:46:34.995562 kernel: PTP clock support registered Feb 12 19:46:34.995571 kernel: PCI: Using ACPI for IRQ routing Feb 12 19:46:34.995581 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 19:46:34.995596 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 12 19:46:34.995605 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Feb 12 19:46:34.995615 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 12 19:46:34.995624 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 12 19:46:34.995636 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 19:46:34.995646 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 19:46:34.995656 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 19:46:34.995665 kernel: pnp: PnP ACPI init Feb 12 19:46:34.995675 kernel: pnp: PnP ACPI: found 4 devices Feb 12 19:46:34.995684 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 19:46:34.995693 kernel: NET: Registered PF_INET protocol family Feb 12 19:46:34.995703 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 19:46:34.995712 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 12 19:46:34.995724 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 19:46:34.995734 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 12 19:46:34.995743 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 12 19:46:34.995753 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 12 19:46:34.995762 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 19:46:34.995771 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 19:46:34.995781 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 19:46:34.995790 kernel: NET: Registered PF_XDP protocol family Feb 12 19:46:34.995912 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 19:46:34.996007 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 19:46:34.996129 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 19:46:34.996222 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 12 19:46:34.996324 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 12 19:46:34.996427 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 12 19:46:34.996583 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 19:46:34.996718 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 19:46:34.996746 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 12 19:46:34.996862 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x730 took 51819 usecs Feb 12 19:46:34.996882 kernel: PCI: CLS 0 bytes, default 64 Feb 12 19:46:34.996896 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 12 19:46:34.996906 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Feb 12 19:46:34.996916 kernel: Initialise system trusted keyrings Feb 12 19:46:34.996926 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 12 19:46:34.996935 kernel: Key type asymmetric registered Feb 12 19:46:34.996945 kernel: Asymmetric key parser 
'x509' registered Feb 12 19:46:34.996959 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 19:46:34.996968 kernel: io scheduler mq-deadline registered Feb 12 19:46:34.996978 kernel: io scheduler kyber registered Feb 12 19:46:34.996987 kernel: io scheduler bfq registered Feb 12 19:46:34.996996 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 19:46:34.997006 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 12 19:46:34.997015 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 12 19:46:34.997038 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 12 19:46:34.997047 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 19:46:34.997057 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 19:46:34.997069 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 19:46:34.997079 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 19:46:34.997088 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 19:46:34.997098 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 19:46:34.997244 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 12 19:46:34.997355 kernel: rtc_cmos 00:03: registered as rtc0 Feb 12 19:46:34.997483 kernel: rtc_cmos 00:03: setting system clock to 2024-02-12T19:46:34 UTC (1707767194) Feb 12 19:46:34.997597 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 12 19:46:34.997614 kernel: intel_pstate: CPU model not supported Feb 12 19:46:34.997627 kernel: NET: Registered PF_INET6 protocol family Feb 12 19:46:34.997641 kernel: Segment Routing with IPv6 Feb 12 19:46:34.997655 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 19:46:34.997699 kernel: NET: Registered PF_PACKET protocol family Feb 12 19:46:34.997708 kernel: Key type dns_resolver registered Feb 12 19:46:34.997718 kernel: IPI shorthand broadcast: enabled Feb 12 19:46:34.997731 kernel: sched_clock: Marking stable (1108004257, 123729122)->(1454668397, -222935018) Feb 12 19:46:34.997749 kernel: registered taskstats version 1 Feb 12 19:46:34.997763 kernel: Loading compiled-in X.509 certificates Feb 12 19:46:34.997777 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 12 19:46:34.997787 kernel: Key type .fscrypt registered Feb 12 19:46:34.997796 kernel: Key type fscrypt-provisioning registered Feb 12 19:46:34.997807 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 12 19:46:34.997822 kernel: ima: Allocated hash algorithm: sha1 Feb 12 19:46:34.997832 kernel: ima: No architecture policies found Feb 12 19:46:34.997841 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 19:46:34.997853 kernel: Write protecting the kernel read-only data: 28672k Feb 12 19:46:34.997863 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 19:46:34.997872 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 19:46:34.997881 kernel: Run /init as init process Feb 12 19:46:34.997892 kernel: with arguments: Feb 12 19:46:34.997906 kernel: /init Feb 12 19:46:34.997944 kernel: with environment: Feb 12 19:46:34.997962 kernel: HOME=/ Feb 12 19:46:34.997976 kernel: TERM=linux Feb 12 19:46:34.998019 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 19:46:34.998068 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:46:34.998098 systemd[1]: Detected virtualization kvm. Feb 12 19:46:34.998114 systemd[1]: Detected architecture x86-64. Feb 12 19:46:34.998124 systemd[1]: Running in initrd. Feb 12 19:46:34.998134 systemd[1]: No hostname configured, using default hostname. Feb 12 19:46:34.998148 systemd[1]: Hostname set to <localhost>. Feb 12 19:46:34.998167 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:46:34.998182 systemd[1]: Queued start job for default target initrd.target. Feb 12 19:46:34.998197 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:46:34.998211 systemd[1]: Reached target cryptsetup.target. Feb 12 19:46:34.998221 systemd[1]: Reached target paths.target. Feb 12 19:46:34.998231 systemd[1]: Reached target slices.target. Feb 12 19:46:34.998240 systemd[1]: Reached target swap.target. Feb 12 19:46:34.998250 systemd[1]: Reached target timers.target. Feb 12 19:46:34.998264 systemd[1]: Listening on iscsid.socket. Feb 12 19:46:34.998274 systemd[1]: Listening on iscsiuio.socket. Feb 12 19:46:34.998284 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:46:34.998294 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:46:34.998304 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:46:34.998314 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:46:34.998324 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:46:34.998334 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:46:34.998347 systemd[1]: Reached target sockets.target. Feb 12 19:46:34.998357 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:46:34.998368 systemd[1]: Finished network-cleanup.service. Feb 12 19:46:34.998381 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 19:46:34.998391 systemd[1]: Starting systemd-journald.service... Feb 12 19:46:34.998401 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:46:34.998414 systemd[1]: Starting systemd-resolved.service... Feb 12 19:46:34.998424 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 19:46:34.998434 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:46:34.998444 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 19:46:34.998454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 12 19:46:34.998473 systemd-journald[184]: Journal started Feb 12 19:46:34.998556 systemd-journald[184]: Runtime Journal (/run/log/journal/d73e4982498646bb885d861ec19375c6) is 4.9M, max 39.5M, 34.5M free. Feb 12 19:46:34.966630 systemd-modules-load[185]: Inserted module 'overlay' Feb 12 19:46:35.036774 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 19:46:35.036817 kernel: Bridge firewalling registered Feb 12 19:46:35.036836 systemd[1]: Started systemd-journald.service. Feb 12 19:46:35.000099 systemd-resolved[186]: Positive Trust Anchors: Feb 12 19:46:35.000108 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:46:35.000145 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:46:35.054411 kernel: audit: type=1130 audit(1707767195.042:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.054452 kernel: audit: type=1130 audit(1707767195.046:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.054475 kernel: SCSI subsystem initialized Feb 12 19:46:35.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.005330 systemd-resolved[186]: Defaulting to hostname 'linux'. Feb 12 19:46:35.022762 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 12 19:46:35.062105 kernel: audit: type=1130 audit(1707767195.054:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.046103 systemd[1]: Started systemd-resolved.service. Feb 12 19:46:35.050665 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:46:35.057738 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 19:46:35.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:46:35.069114 kernel: audit: type=1130 audit(1707767195.064:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.069169 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 19:46:35.069329 systemd[1]: Reached target nss-lookup.target. Feb 12 19:46:35.076176 kernel: device-mapper: uevent: version 1.0.3 Feb 12 19:46:35.076230 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 19:46:35.076281 systemd-modules-load[185]: Inserted module 'dm_multipath' Feb 12 19:46:35.078404 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 19:46:35.096215 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:46:35.110077 kernel: audit: type=1130 audit(1707767195.100:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.106273 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:46:35.119631 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:46:35.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.124050 kernel: audit: type=1130 audit(1707767195.120:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.124358 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 19:46:35.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.126712 systemd[1]: Starting dracut-cmdline.service... Feb 12 19:46:35.140258 kernel: audit: type=1130 audit(1707767195.125:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.155533 dracut-cmdline[207]: dracut-dracut-053 Feb 12 19:46:35.159663 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 12 19:46:35.272064 kernel: Loading iSCSI transport class v2.0-870. Feb 12 19:46:35.289071 kernel: iscsi: registered transport (tcp) Feb 12 19:46:35.319093 kernel: iscsi: registered transport (qla4xxx) Feb 12 19:46:35.319189 kernel: QLogic iSCSI HBA Driver Feb 12 19:46:35.383880 systemd[1]: Finished dracut-cmdline.service. 
Feb 12 19:46:35.389082 kernel: audit: type=1130 audit(1707767195.384:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.385902 systemd[1]: Starting dracut-pre-udev.service... Feb 12 19:46:35.455123 kernel: raid6: avx2x4 gen() 13841 MB/s Feb 12 19:46:35.472245 kernel: raid6: avx2x4 xor() 8044 MB/s Feb 12 19:46:35.489108 kernel: raid6: avx2x2 gen() 13756 MB/s Feb 12 19:46:35.506135 kernel: raid6: avx2x2 xor() 14906 MB/s Feb 12 19:46:35.523122 kernel: raid6: avx2x1 gen() 10294 MB/s Feb 12 19:46:35.540121 kernel: raid6: avx2x1 xor() 13114 MB/s Feb 12 19:46:35.557123 kernel: raid6: sse2x4 gen() 9851 MB/s Feb 12 19:46:35.574109 kernel: raid6: sse2x4 xor() 5641 MB/s Feb 12 19:46:35.591110 kernel: raid6: sse2x2 gen() 8922 MB/s Feb 12 19:46:35.608165 kernel: raid6: sse2x2 xor() 5327 MB/s Feb 12 19:46:35.625105 kernel: raid6: sse2x1 gen() 7398 MB/s Feb 12 19:46:35.646963 kernel: raid6: sse2x1 xor() 4626 MB/s Feb 12 19:46:35.647071 kernel: raid6: using algorithm avx2x4 gen() 13841 MB/s Feb 12 19:46:35.647093 kernel: raid6: .... xor() 8044 MB/s, rmw enabled Feb 12 19:46:35.647939 kernel: raid6: using avx2x2 recovery algorithm Feb 12 19:46:35.667102 kernel: xor: automatically using best checksumming function avx Feb 12 19:46:35.843022 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 19:46:35.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.865146 systemd[1]: Finished dracut-pre-udev.service. Feb 12 19:46:35.866000 audit: BPF prog-id=7 op=LOAD Feb 12 19:46:35.866000 audit: BPF prog-id=8 op=LOAD Feb 12 19:46:35.870493 systemd[1]: Starting systemd-udevd.service... Feb 12 19:46:35.872963 kernel: audit: type=1130 audit(1707767195.865:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.891753 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 19:46:35.913160 systemd[1]: Started systemd-udevd.service. Feb 12 19:46:35.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:35.918648 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 19:46:35.952693 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Feb 12 19:46:36.023895 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 19:46:36.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:36.025904 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:46:36.107945 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:46:36.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:46:36.200093 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 12 19:46:36.221827 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 19:46:36.221929 kernel: GPT:9289727 != 125829119 Feb 12 19:46:36.221953 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 19:46:36.221990 kernel: GPT:9289727 != 125829119 Feb 12 19:46:36.222011 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 19:46:36.222057 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:46:36.224066 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 19:46:36.254519 kernel: virtio_blk virtio5: [vdb] 952 512-byte logical blocks (487 kB/476 KiB) Feb 12 19:46:36.263970 kernel: scsi host0: Virtio SCSI HBA Feb 12 19:46:36.296838 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 19:46:36.296945 kernel: AES CTR mode by8 optimization enabled Feb 12 19:46:36.331253 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 19:46:36.430003 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (431) Feb 12 19:46:36.430072 kernel: ACPI: bus type USB registered Feb 12 19:46:36.430106 kernel: libata version 3.00 loaded. Feb 12 19:46:36.430124 kernel: usbcore: registered new interface driver usbfs Feb 12 19:46:36.430137 kernel: usbcore: registered new interface driver hub Feb 12 19:46:36.430150 kernel: usbcore: registered new device driver usb Feb 12 19:46:36.430173 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 19:46:36.430424 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Feb 12 19:46:36.430446 kernel: scsi host1: ata_piix Feb 12 19:46:36.430670 kernel: ehci-pci: EHCI PCI platform driver Feb 12 19:46:36.430686 kernel: scsi host2: ata_piix Feb 12 19:46:36.431157 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Feb 12 19:46:36.431180 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Feb 12 19:46:36.431196 kernel: uhci_hcd: USB Universal Host Controller Interface driver Feb 12 19:46:36.428775 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 19:46:36.439810 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 19:46:36.453278 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 19:46:36.458069 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Feb 12 19:46:36.458414 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Feb 12 19:46:36.459525 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Feb 12 19:46:36.460915 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Feb 12 19:46:36.463159 kernel: hub 1-0:1.0: USB hub found Feb 12 19:46:36.463583 kernel: hub 1-0:1.0: 2 ports detected Feb 12 19:46:36.463406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:46:36.466435 systemd[1]: Starting disk-uuid.service... Feb 12 19:46:36.477336 disk-uuid[496]: Primary Header is updated. Feb 12 19:46:36.477336 disk-uuid[496]: Secondary Entries is updated. Feb 12 19:46:36.477336 disk-uuid[496]: Secondary Header is updated. Feb 12 19:46:36.492455 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:46:36.501083 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:46:37.520071 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 19:46:37.520936 disk-uuid[503]: The operation has completed successfully. Feb 12 19:46:37.598623 systemd[1]: disk-uuid.service: Deactivated successfully. 
Feb 12 19:46:37.600221 systemd[1]: Finished disk-uuid.service. Feb 12 19:46:37.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:37.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:37.603621 systemd[1]: Starting verity-setup.service... Feb 12 19:46:37.645114 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 19:46:37.756281 systemd[1]: Found device dev-mapper-usr.device. Feb 12 19:46:37.760920 systemd[1]: Mounting sysusr-usr.mount... Feb 12 19:46:37.763000 systemd[1]: Finished verity-setup.service. Feb 12 19:46:37.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:37.910075 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 19:46:37.911591 systemd[1]: Mounted sysusr-usr.mount. Feb 12 19:46:37.913067 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 19:46:37.921238 systemd[1]: Starting ignition-setup.service... Feb 12 19:46:37.924245 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 19:46:37.961389 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:46:37.961519 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:46:37.961542 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:46:37.994289 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 19:46:38.016319 systemd[1]: Finished ignition-setup.service. Feb 12 19:46:38.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.020107 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 19:46:38.226311 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 19:46:38.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.229000 audit: BPF prog-id=9 op=LOAD Feb 12 19:46:38.231375 systemd[1]: Starting systemd-networkd.service... Feb 12 19:46:38.289517 systemd-networkd[688]: lo: Link UP Feb 12 19:46:38.289532 systemd-networkd[688]: lo: Gained carrier Feb 12 19:46:38.291318 systemd-networkd[688]: Enumeration completed Feb 12 19:46:38.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.291846 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:46:38.292254 systemd[1]: Started systemd-networkd.service. Feb 12 19:46:38.293133 systemd[1]: Reached target network.target. Feb 12 19:46:38.299791 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Feb 12 19:46:38.302170 systemd-networkd[688]: eth1: Link UP Feb 12 19:46:38.302178 systemd-networkd[688]: eth1: Gained carrier Feb 12 19:46:38.303522 systemd[1]: Starting iscsiuio.service... Feb 12 19:46:38.312806 systemd-networkd[688]: eth0: Link UP Feb 12 19:46:38.315793 systemd-networkd[688]: eth0: Gained carrier Feb 12 19:46:38.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.338698 systemd[1]: Started iscsiuio.service. Feb 12 19:46:38.343453 systemd[1]: Starting iscsid.service... Feb 12 19:46:38.346652 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253 Feb 12 19:46:38.352442 systemd-networkd[688]: eth0: DHCPv4 address 64.23.171.188/20, gateway 64.23.160.1 acquired from 169.254.169.253 Feb 12 19:46:38.353598 iscsid[693]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:46:38.353598 iscsid[693]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 19:46:38.353598 iscsid[693]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 19:46:38.353598 iscsid[693]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 19:46:38.353598 iscsid[693]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 19:46:38.353598 iscsid[693]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 19:46:38.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.361303 systemd[1]: Started iscsid.service. Feb 12 19:46:38.377407 systemd[1]: Starting dracut-initqueue.service... Feb 12 19:46:38.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.400883 systemd[1]: Finished dracut-initqueue.service. Feb 12 19:46:38.401843 systemd[1]: Reached target remote-fs-pre.target. Feb 12 19:46:38.402404 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:46:38.405131 ignition[606]: Ignition 2.14.0 Feb 12 19:46:38.405172 ignition[606]: Stage: fetch-offline Feb 12 19:46:38.405933 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:46:38.405989 ignition[606]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:46:38.408257 systemd[1]: Reached target remote-fs.target. Feb 12 19:46:38.411289 systemd[1]: Starting dracut-pre-mount.service... 
Feb 12 19:46:38.414237 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:46:38.414414 ignition[606]: parsed url from cmdline: "" Feb 12 19:46:38.414420 ignition[606]: no config URL provided Feb 12 19:46:38.414428 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:46:38.414441 ignition[606]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:46:38.414450 ignition[606]: failed to fetch config: resource requires networking Feb 12 19:46:38.414619 ignition[606]: Ignition finished successfully Feb 12 19:46:38.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.421739 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 19:46:38.426364 systemd[1]: Starting ignition-fetch.service... Feb 12 19:46:38.438924 systemd[1]: Finished dracut-pre-mount.service. Feb 12 19:46:38.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.450772 ignition[704]: Ignition 2.14.0 Feb 12 19:46:38.452207 ignition[704]: Stage: fetch Feb 12 19:46:38.452449 ignition[704]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:46:38.452474 ignition[704]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:46:38.454696 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:46:38.455135 ignition[704]: parsed url from cmdline: "" Feb 12 19:46:38.455144 ignition[704]: no config URL provided Feb 12 19:46:38.455156 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 19:46:38.455185 ignition[704]: no config at "/usr/lib/ignition/user.ign" Feb 12 19:46:38.455240 ignition[704]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Feb 12 19:46:38.496994 ignition[704]: GET result: OK Feb 12 19:46:38.497202 ignition[704]: parsing config with SHA512: 9591edbdcf69980ff46c9ab57d072b1cc36ca1247900b3176127a12b298760d986376d1cb86f2bf020a57f4df379fe6ccc1638f50ed51a01cb98efee07237b92 Feb 12 19:46:38.547174 unknown[704]: fetched base config from "system" Feb 12 19:46:38.548132 unknown[704]: fetched base config from "system" Feb 12 19:46:38.548847 unknown[704]: fetched user config from "digitalocean" Feb 12 19:46:38.551093 ignition[704]: fetch: fetch complete Feb 12 19:46:38.551949 ignition[704]: fetch: fetch passed Feb 12 19:46:38.552115 ignition[704]: Ignition finished successfully Feb 12 19:46:38.554651 systemd[1]: Finished ignition-fetch.service. Feb 12 19:46:38.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.557828 kernel: kauditd_printk_skb: 17 callbacks suppressed Feb 12 19:46:38.557886 kernel: audit: type=1130 audit(1707767198.555:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.558571 systemd[1]: Starting ignition-kargs.service... 
Feb 12 19:46:38.576527 ignition[713]: Ignition 2.14.0 Feb 12 19:46:38.576546 ignition[713]: Stage: kargs Feb 12 19:46:38.576760 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:46:38.576791 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:46:38.579828 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:46:38.582990 ignition[713]: kargs: kargs passed Feb 12 19:46:38.583250 ignition[713]: Ignition finished successfully Feb 12 19:46:38.585084 systemd[1]: Finished ignition-kargs.service. Feb 12 19:46:38.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.587877 systemd[1]: Starting ignition-disks.service... Feb 12 19:46:38.593794 kernel: audit: type=1130 audit(1707767198.585:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.604125 ignition[719]: Ignition 2.14.0 Feb 12 19:46:38.611388 systemd[1]: Finished ignition-disks.service. Feb 12 19:46:38.617179 kernel: audit: type=1130 audit(1707767198.612:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.604140 ignition[719]: Stage: disks Feb 12 19:46:38.612397 systemd[1]: Reached target initrd-root-device.target. Feb 12 19:46:38.604390 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:46:38.616388 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:46:38.604425 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:46:38.617676 systemd[1]: Reached target local-fs.target. Feb 12 19:46:38.607408 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:46:38.619544 systemd[1]: Reached target sysinit.target. Feb 12 19:46:38.609518 ignition[719]: disks: disks passed Feb 12 19:46:38.620780 systemd[1]: Reached target basic.target. Feb 12 19:46:38.609627 ignition[719]: Ignition finished successfully Feb 12 19:46:38.623709 systemd[1]: Starting systemd-fsck-root.service... Feb 12 19:46:38.666397 systemd-fsck[726]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 12 19:46:38.675488 systemd[1]: Finished systemd-fsck-root.service. Feb 12 19:46:38.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.684996 kernel: audit: type=1130 audit(1707767198.678:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:38.681704 systemd[1]: Mounting sysroot.mount... 
Feb 12 19:46:38.702064 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 19:46:38.704833 systemd[1]: Mounted sysroot.mount. Feb 12 19:46:38.705650 systemd[1]: Reached target initrd-root-fs.target. Feb 12 19:46:38.711016 systemd[1]: Mounting sysroot-usr.mount... Feb 12 19:46:38.713512 systemd[1]: Starting flatcar-digitalocean-network.service... Feb 12 19:46:38.716934 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 12 19:46:38.717841 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 19:46:38.717924 systemd[1]: Reached target ignition-diskful.target. Feb 12 19:46:38.729483 systemd[1]: Mounted sysroot-usr.mount. Feb 12 19:46:38.737434 systemd[1]: Starting initrd-setup-root.service... Feb 12 19:46:38.758097 initrd-setup-root[738]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 19:46:38.784171 initrd-setup-root[746]: cut: /sysroot/etc/group: No such file or directory Feb 12 19:46:38.795418 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 19:46:38.827958 initrd-setup-root[755]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 19:46:38.839127 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (752) Feb 12 19:46:38.844438 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:46:38.844564 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:46:38.844591 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:46:38.850474 initrd-setup-root[765]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 19:46:38.876195 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:46:39.004609 systemd[1]: Finished initrd-setup-root.service. Feb 12 19:46:39.013382 kernel: audit: type=1130 audit(1707767199.005:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.013669 coreos-metadata[732]: Feb 12 19:46:39.011 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:46:39.009051 systemd[1]: Starting ignition-mount.service... Feb 12 19:46:39.012532 systemd[1]: Starting sysroot-boot.service... Feb 12 19:46:39.034290 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 19:46:39.034482 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 19:46:39.042763 coreos-metadata[732]: Feb 12 19:46:39.042 INFO Fetch successful Feb 12 19:46:39.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.055046 kernel: audit: type=1130 audit(1707767199.051:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.050733 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Feb 12 19:46:39.050971 systemd[1]: Finished flatcar-digitalocean-network.service. 
Feb 12 19:46:39.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.070190 kernel: audit: type=1131 audit(1707767199.051:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.080506 coreos-metadata[733]: Feb 12 19:46:39.080 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:46:39.090419 ignition[804]: INFO : Ignition 2.14.0 Feb 12 19:46:39.090419 ignition[804]: INFO : Stage: mount Feb 12 19:46:39.090419 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:46:39.090419 ignition[804]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:46:39.093938 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:46:39.095734 ignition[804]: INFO : mount: mount passed Feb 12 19:46:39.095734 ignition[804]: INFO : Ignition finished successfully Feb 12 19:46:39.097052 coreos-metadata[733]: Feb 12 19:46:39.094 INFO Fetch successful Feb 12 19:46:39.104560 kernel: audit: type=1130 audit(1707767199.099:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.098609 systemd[1]: Finished ignition-mount.service. Feb 12 19:46:39.105310 systemd[1]: Finished sysroot-boot.service. Feb 12 19:46:39.110455 kernel: audit: type=1130 audit(1707767199.106:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.110568 coreos-metadata[733]: Feb 12 19:46:39.110 INFO wrote hostname ci-3510.3.2-3-7482959a87 to /sysroot/etc/hostname Feb 12 19:46:39.111909 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 12 19:46:39.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.117200 kernel: audit: type=1130 audit(1707767199.112:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:39.114381 systemd[1]: Starting ignition-files.service... Feb 12 19:46:39.129858 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
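flatcar-metadata-hostname fetches the droplet metadata document http://169.254.169.254/metadata/v1.json and, per the log, writes the hostname ci-3510.3.2-3-7482959a87 to /sysroot/etc/hostname. A rough sketch of that step, assuming the metadata JSON exposes a top-level "hostname" field:

    import json
    import urllib.request

    # Endpoint and target path are taken from the log; the "hostname" field name is an
    # assumption about the shape of DigitalOcean's metadata JSON.
    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def write_hostname(target: str = "/sysroot/etc/hostname") -> str:
        with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
            metadata = json.load(resp)
        hostname = metadata["hostname"]
        with open(target, "w") as f:
            f.write(hostname + "\n")
        return hostname
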
Feb 12 19:46:39.151262 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813) Feb 12 19:46:39.156017 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 19:46:39.156142 kernel: BTRFS info (device vda6): using free space tree Feb 12 19:46:39.156156 kernel: BTRFS info (device vda6): has skinny extents Feb 12 19:46:39.167256 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 19:46:39.198895 ignition[832]: INFO : Ignition 2.14.0 Feb 12 19:46:39.198895 ignition[832]: INFO : Stage: files Feb 12 19:46:39.198895 ignition[832]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:46:39.198895 ignition[832]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:46:39.198895 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:46:39.206217 ignition[832]: DEBUG : files: compiled without relabeling support, skipping Feb 12 19:46:39.208601 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 19:46:39.208601 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 19:46:39.217452 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 19:46:39.218705 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 19:46:39.220001 unknown[832]: wrote ssh authorized keys file for user: core Feb 12 19:46:39.221414 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 19:46:39.221414 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:46:39.221414 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 19:46:39.221414 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:46:39.226892 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 19:46:39.783791 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 19:46:39.856558 systemd-networkd[688]: eth1: Gained IPv6LL Feb 12 19:46:40.173323 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 19:46:40.173323 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 19:46:40.171755 systemd-networkd[688]: eth0: Gained IPv6LL Feb 12 19:46:40.177799 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:46:40.177799 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 19:46:40.609867 ignition[832]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 19:46:40.901226 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 19:46:40.901226 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 19:46:40.904652 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:46:40.906477 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 12 19:46:40.985440 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 12 19:46:41.445265 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 12 19:46:41.446970 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 19:46:41.448317 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:46:41.449376 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 12 19:46:41.505472 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 19:46:42.958661 ignition[832]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 12 19:46:42.958661 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 19:46:42.958661 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 12 19:46:42.958661 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 19:46:42.958661 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:46:42.958661 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 19:46:42.986524 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:46:42.986524 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(c): [started] processing unit "containerd.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(c): [finished] processing unit "containerd.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(e): [started] processing unit "prepare-cni-plugins.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(e): op(f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(e): op(f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(e): [finished] processing unit "prepare-cni-plugins.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(10): [started] processing unit "prepare-critools.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(10): op(11): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(10): op(11): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(10): [finished] processing unit "prepare-critools.service" Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 19:46:42.986524 ignition[832]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 19:46:43.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.059435 ignition[832]: INFO : files: op(13): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:46:43.059435 ignition[832]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 19:46:43.059435 ignition[832]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service" Feb 12 19:46:43.059435 ignition[832]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 19:46:43.059435 ignition[832]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:46:43.059435 ignition[832]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 19:46:43.059435 ignition[832]: INFO : files: files passed Feb 12 19:46:43.059435 ignition[832]: INFO : Ignition finished successfully Feb 12 19:46:43.001309 systemd[1]: Finished ignition-files.service. Feb 12 19:46:43.007984 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 19:46:43.067783 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 19:46:43.024392 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 19:46:43.026085 systemd[1]: Starting ignition-quench.service... 
Feb 12 19:46:43.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.044218 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 19:46:43.044380 systemd[1]: Finished ignition-quench.service. Feb 12 19:46:43.084577 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 19:46:43.085614 systemd[1]: Reached target ignition-complete.target. Feb 12 19:46:43.088355 systemd[1]: Starting initrd-parse-etc.service... Feb 12 19:46:43.145057 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 19:46:43.146050 systemd[1]: Finished initrd-parse-etc.service. Feb 12 19:46:43.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.148152 systemd[1]: Reached target initrd-fs.target. Feb 12 19:46:43.149385 systemd[1]: Reached target initrd.target. Feb 12 19:46:43.150698 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 19:46:43.161124 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 19:46:43.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.209351 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 19:46:43.214222 systemd[1]: Starting initrd-cleanup.service... Feb 12 19:46:43.253375 systemd[1]: Stopped target nss-lookup.target. Feb 12 19:46:43.254207 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 19:46:43.255425 systemd[1]: Stopped target timers.target. Feb 12 19:46:43.290895 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 19:46:43.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.291131 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 19:46:43.299117 systemd[1]: Stopped target initrd.target. Feb 12 19:46:43.300320 systemd[1]: Stopped target basic.target. Feb 12 19:46:43.301570 systemd[1]: Stopped target ignition-complete.target. Feb 12 19:46:43.303926 systemd[1]: Stopped target ignition-diskful.target. Feb 12 19:46:43.304664 systemd[1]: Stopped target initrd-root-device.target. Feb 12 19:46:43.305469 systemd[1]: Stopped target remote-fs.target. Feb 12 19:46:43.306604 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 19:46:43.314071 systemd[1]: Stopped target sysinit.target. Feb 12 19:46:43.315301 systemd[1]: Stopped target local-fs.target. 
Feb 12 19:46:43.316706 systemd[1]: Stopped target local-fs-pre.target. Feb 12 19:46:43.318533 systemd[1]: Stopped target swap.target. Feb 12 19:46:43.324444 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 19:46:43.330230 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 19:46:43.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.334581 systemd[1]: Stopped target cryptsetup.target. Feb 12 19:46:43.335306 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 19:46:43.337268 systemd[1]: Stopped dracut-initqueue.service. Feb 12 19:46:43.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.338349 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 19:46:43.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.338596 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 19:46:43.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.339865 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 19:46:43.340127 systemd[1]: Stopped ignition-files.service. Feb 12 19:46:43.340935 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 12 19:46:43.341245 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 12 19:46:43.343981 systemd[1]: Stopping ignition-mount.service... Feb 12 19:46:43.352083 iscsid[693]: iscsid shutting down. Feb 12 19:46:43.353695 systemd[1]: Stopping iscsid.service... Feb 12 19:46:43.356769 systemd[1]: Stopping sysroot-boot.service... Feb 12 19:46:43.358222 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 19:46:43.359550 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 19:46:43.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.361404 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 19:46:43.362552 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 19:46:43.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:46:43.364952 ignition[870]: INFO : Ignition 2.14.0 Feb 12 19:46:43.364952 ignition[870]: INFO : Stage: umount Feb 12 19:46:43.367177 ignition[870]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 19:46:43.367177 ignition[870]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Feb 12 19:46:43.370231 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Feb 12 19:46:43.372457 ignition[870]: INFO : umount: umount passed Feb 12 19:46:43.372457 ignition[870]: INFO : Ignition finished successfully Feb 12 19:46:43.376563 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 19:46:43.377427 systemd[1]: Stopped iscsid.service. Feb 12 19:46:43.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.380581 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 19:46:43.382689 systemd[1]: Stopped ignition-mount.service. Feb 12 19:46:43.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.392310 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 19:46:43.393499 systemd[1]: Finished initrd-cleanup.service. Feb 12 19:46:43.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.397163 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 19:46:43.398225 systemd[1]: Stopped ignition-disks.service. Feb 12 19:46:43.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.403292 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 19:46:43.406991 systemd[1]: Stopped ignition-kargs.service. Feb 12 19:46:43.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.413710 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 19:46:43.413807 systemd[1]: Stopped ignition-fetch.service. Feb 12 19:46:43.414602 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 19:46:43.414682 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 19:46:43.415269 systemd[1]: Stopped target paths.target. 
Feb 12 19:46:43.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.415883 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 19:46:43.419146 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 19:46:43.419809 systemd[1]: Stopped target slices.target. Feb 12 19:46:43.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.420332 systemd[1]: Stopped target sockets.target. Feb 12 19:46:43.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.420846 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 19:46:43.420925 systemd[1]: Closed iscsid.socket. Feb 12 19:46:43.421437 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 19:46:43.421516 systemd[1]: Stopped ignition-setup.service. Feb 12 19:46:43.422315 systemd[1]: Stopping iscsiuio.service... Feb 12 19:46:43.434659 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 19:46:43.435633 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 19:46:43.435801 systemd[1]: Stopped iscsiuio.service. Feb 12 19:46:43.440309 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 19:46:43.440424 systemd[1]: Stopped sysroot-boot.service. Feb 12 19:46:43.443841 systemd[1]: Stopped target network.target. Feb 12 19:46:43.453884 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 19:46:43.453977 systemd[1]: Closed iscsiuio.socket. Feb 12 19:46:43.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.454655 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 19:46:43.454793 systemd[1]: Stopped initrd-setup-root.service. Feb 12 19:46:43.456905 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:46:43.463142 systemd[1]: Stopping systemd-resolved.service... Feb 12 19:46:43.464157 systemd-networkd[688]: eth0: DHCPv6 lease lost Feb 12 19:46:43.483407 systemd-networkd[688]: eth1: DHCPv6 lease lost Feb 12 19:46:43.487037 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:46:43.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.487246 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:46:43.492456 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 19:46:43.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.495000 audit: BPF prog-id=9 op=UNLOAD Feb 12 19:46:43.492622 systemd[1]: Stopped systemd-resolved.service. Feb 12 19:46:43.494353 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Feb 12 19:46:43.497000 audit: BPF prog-id=6 op=UNLOAD Feb 12 19:46:43.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.494415 systemd[1]: Closed systemd-networkd.socket. Feb 12 19:46:43.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.497629 systemd[1]: Stopping network-cleanup.service... Feb 12 19:46:43.498177 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 19:46:43.498284 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 19:46:43.499514 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:46:43.499690 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:46:43.500553 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 19:46:43.500619 systemd[1]: Stopped systemd-modules-load.service. Feb 12 19:46:43.503070 systemd[1]: Stopping systemd-udevd.service... Feb 12 19:46:43.508127 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:46:43.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.509703 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 19:46:43.509918 systemd[1]: Stopped systemd-udevd.service. Feb 12 19:46:43.512833 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 19:46:43.512924 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 19:46:43.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.516367 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 19:46:43.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.516432 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 19:46:43.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.517281 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 19:46:43.517390 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 19:46:43.518443 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 19:46:43.518514 systemd[1]: Stopped dracut-cmdline.service. Feb 12 19:46:43.519592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 19:46:43.519668 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 19:46:43.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:46:43.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.528414 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 19:46:43.529077 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 19:46:43.529247 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 19:46:43.530279 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 19:46:43.530337 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 19:46:43.530920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 19:46:43.530987 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 19:46:43.533452 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 19:46:43.534156 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 19:46:43.534356 systemd[1]: Stopped network-cleanup.service. Feb 12 19:46:43.564482 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 19:46:43.571350 kernel: kauditd_printk_skb: 42 callbacks suppressed Feb 12 19:46:43.571401 kernel: audit: type=1130 audit(1707767203.565:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.564706 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 19:46:43.576024 kernel: audit: type=1131 audit(1707767203.571:81): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:43.571599 systemd[1]: Reached target initrd-switch-root.target. Feb 12 19:46:43.578409 systemd[1]: Starting initrd-switch-root.service... Feb 12 19:46:43.593184 systemd[1]: Switching root. 
Feb 12 19:46:43.604760 kernel: audit: type=1334 audit(1707767203.597:82): prog-id=5 op=UNLOAD Feb 12 19:46:43.604856 kernel: audit: type=1334 audit(1707767203.597:83): prog-id=4 op=UNLOAD Feb 12 19:46:43.604876 kernel: audit: type=1334 audit(1707767203.599:84): prog-id=3 op=UNLOAD Feb 12 19:46:43.604895 kernel: audit: type=1334 audit(1707767203.600:85): prog-id=8 op=UNLOAD Feb 12 19:46:43.604913 kernel: audit: type=1334 audit(1707767203.600:86): prog-id=7 op=UNLOAD Feb 12 19:46:43.597000 audit: BPF prog-id=5 op=UNLOAD Feb 12 19:46:43.597000 audit: BPF prog-id=4 op=UNLOAD Feb 12 19:46:43.599000 audit: BPF prog-id=3 op=UNLOAD Feb 12 19:46:43.600000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:46:43.600000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:46:43.629001 systemd-journald[184]: Journal stopped Feb 12 19:46:49.524221 systemd-journald[184]: Received SIGTERM from PID 1 (n/a). Feb 12 19:46:49.524342 kernel: audit: type=1335 audit(1707767203.629:87): pid=184 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Feb 12 19:46:49.524379 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 19:46:49.524398 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 19:46:49.524418 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 19:46:49.524443 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 19:46:49.524462 kernel: SELinux: policy capability open_perms=1 Feb 12 19:46:49.524482 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 19:46:49.524501 kernel: SELinux: policy capability always_check_network=0 Feb 12 19:46:49.524519 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 19:46:49.524542 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 19:46:49.524563 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 19:46:49.524592 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 19:46:49.524614 kernel: audit: type=1403 audit(1707767204.100:88): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 19:46:49.524639 systemd[1]: Successfully loaded SELinux policy in 68.337ms. Feb 12 19:46:49.524677 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.422ms. Feb 12 19:46:49.524701 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 19:46:49.524724 systemd[1]: Detected virtualization kvm. Feb 12 19:46:49.524750 systemd[1]: Detected architecture x86-64. Feb 12 19:46:49.524777 systemd[1]: Detected first boot. Feb 12 19:46:49.524807 systemd[1]: Hostname set to . Feb 12 19:46:49.524848 systemd[1]: Initializing machine ID from VM UUID. Feb 12 19:46:49.524871 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 19:46:49.524898 kernel: audit: type=1400 audit(1707767204.482:89): avc: denied { associate } for pid=919 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 19:46:49.524918 systemd[1]: Populated /etc with preset unit settings. 
Feb 12 19:46:49.524942 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:46:49.524971 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:46:49.524995 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:46:49.525016 systemd[1]: Queued start job for default target multi-user.target. Feb 12 19:46:49.533162 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 19:46:49.533205 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 19:46:49.533229 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 19:46:49.533252 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 19:46:49.533276 systemd[1]: Created slice system-getty.slice. Feb 12 19:46:49.533309 systemd[1]: Created slice system-modprobe.slice. Feb 12 19:46:49.533334 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 19:46:49.533357 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 19:46:49.533381 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 19:46:49.533403 systemd[1]: Created slice user.slice. Feb 12 19:46:49.533425 systemd[1]: Started systemd-ask-password-console.path. Feb 12 19:46:49.533448 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 19:46:49.533475 systemd[1]: Set up automount boot.automount. Feb 12 19:46:49.533496 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 19:46:49.533520 systemd[1]: Reached target integritysetup.target. Feb 12 19:46:49.533544 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 19:46:49.533567 systemd[1]: Reached target remote-fs.target. Feb 12 19:46:49.533590 systemd[1]: Reached target slices.target. Feb 12 19:46:49.533613 systemd[1]: Reached target swap.target. Feb 12 19:46:49.533636 systemd[1]: Reached target torcx.target. Feb 12 19:46:49.533665 systemd[1]: Reached target veritysetup.target. Feb 12 19:46:49.533689 systemd[1]: Listening on systemd-coredump.socket. Feb 12 19:46:49.533713 systemd[1]: Listening on systemd-initctl.socket. Feb 12 19:46:49.533736 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 19:46:49.533759 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 12 19:46:49.533787 kernel: audit: type=1400 audit(1707767209.221:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:46:49.533811 kernel: audit: type=1335 audit(1707767209.221:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:46:49.533834 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 19:46:49.533858 systemd[1]: Listening on systemd-journald.socket. Feb 12 19:46:49.533886 systemd[1]: Listening on systemd-networkd.socket. Feb 12 19:46:49.533929 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 19:46:49.533952 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 19:46:49.534067 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 19:46:49.534097 systemd[1]: Mounting dev-hugepages.mount... 
Feb 12 19:46:49.534120 systemd[1]: Mounting dev-mqueue.mount... Feb 12 19:46:49.534144 systemd[1]: Mounting media.mount... Feb 12 19:46:49.534168 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:46:49.534191 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 19:46:49.534222 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 19:46:49.534246 systemd[1]: Mounting tmp.mount... Feb 12 19:46:49.534268 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 19:46:49.534290 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:46:49.534313 systemd[1]: Starting kmod-static-nodes.service... Feb 12 19:46:49.534340 systemd[1]: Starting modprobe@configfs.service... Feb 12 19:46:49.534362 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:46:49.534386 systemd[1]: Starting modprobe@drm.service... Feb 12 19:46:49.534408 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:46:49.534433 systemd[1]: Starting modprobe@fuse.service... Feb 12 19:46:49.534456 systemd[1]: Starting modprobe@loop.service... Feb 12 19:46:49.534479 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:46:49.534502 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 19:46:49.534525 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 19:46:49.534555 systemd[1]: Starting systemd-journald.service... Feb 12 19:46:49.534604 systemd[1]: Starting systemd-modules-load.service... Feb 12 19:46:49.534626 systemd[1]: Starting systemd-network-generator.service... Feb 12 19:46:49.534649 systemd[1]: Starting systemd-remount-fs.service... Feb 12 19:46:49.534691 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 19:46:49.534726 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:46:49.534749 systemd[1]: Mounted dev-hugepages.mount. Feb 12 19:46:49.534773 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:46:49.534795 systemd[1]: Mounted media.mount. Feb 12 19:46:49.534823 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:46:49.534845 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:46:49.534870 systemd[1]: Mounted tmp.mount. Feb 12 19:46:49.534903 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:46:49.534927 kernel: audit: type=1130 audit(1707767209.459:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.534950 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:46:49.534974 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:46:49.534998 kernel: audit: type=1130 audit(1707767209.469:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.535021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:46:49.535061 kernel: audit: type=1131 audit(1707767209.469:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:46:49.535083 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:46:49.535106 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:46:49.535129 kernel: audit: type=1130 audit(1707767209.480:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.535151 systemd[1]: Finished modprobe@drm.service. Feb 12 19:46:49.535175 kernel: audit: type=1131 audit(1707767209.480:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.535197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:46:49.535223 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:46:49.535246 kernel: audit: type=1130 audit(1707767209.492:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.535268 kernel: audit: type=1131 audit(1707767209.492:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.535289 kernel: audit: type=1130 audit(1707767209.505:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.535312 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:46:49.535336 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:46:49.535363 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:46:49.535386 systemd[1]: Reached target network-pre.target. Feb 12 19:46:49.535412 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:46:49.535435 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:46:49.535471 systemd-journald[1009]: Journal started Feb 12 19:46:49.535573 systemd-journald[1009]: Runtime Journal (/run/log/journal/d73e4982498646bb885d861ec19375c6) is 4.9M, max 39.5M, 34.5M free. Feb 12 19:46:49.221000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 19:46:49.221000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 19:46:49.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:46:49.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.515000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 19:46:49.515000 audit[1009]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffded002e0 a2=4000 a3=7fffded0037c items=0 ppid=1 pid=1009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:46:49.515000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 19:46:49.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.544677 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:46:49.544774 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:46:49.549099 kernel: fuse: init (API version 7.34) Feb 12 19:46:49.558332 kernel: loop: module loaded Feb 12 19:46:49.562715 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:46:49.573439 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:46:49.581823 systemd[1]: Started systemd-journald.service. 
Feb 12 19:46:49.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.586778 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:46:49.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.594429 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:46:49.596080 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:46:49.596410 systemd[1]: Finished modprobe@loop.service. Feb 12 19:46:49.597499 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:46:49.604991 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:46:49.612149 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:46:49.620290 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:46:49.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.627771 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:46:49.629595 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:46:49.631445 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:46:49.648924 systemd-journald[1009]: Time spent on flushing to /var/log/journal/d73e4982498646bb885d861ec19375c6 is 59.340ms for 1131 entries. Feb 12 19:46:49.648924 systemd-journald[1009]: System Journal (/var/log/journal/d73e4982498646bb885d861ec19375c6) is 8.0M, max 195.6M, 187.6M free. Feb 12 19:46:49.716770 systemd-journald[1009]: Received client request to flush runtime journal. Feb 12 19:46:49.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.698845 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:46:49.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.718612 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:46:49.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:46:49.760436 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:46:49.763132 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:46:49.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.765530 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 19:46:49.768393 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:46:49.796822 udevadm[1059]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:46:49.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.823388 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:46:49.826863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 19:46:49.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:49.876367 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 19:46:50.963879 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:46:50.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:50.979262 systemd[1]: Starting systemd-udevd.service... Feb 12 19:46:51.020945 systemd-udevd[1069]: Using default interface naming scheme 'v252'. Feb 12 19:46:51.101242 systemd[1]: Started systemd-udevd.service. Feb 12 19:46:51.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:51.108928 systemd[1]: Starting systemd-networkd.service... Feb 12 19:46:51.129079 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:46:51.241482 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:46:51.249014 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 19:46:51.251530 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 19:46:51.255935 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 19:46:51.263215 systemd[1]: Starting modprobe@loop.service... Feb 12 19:46:51.264177 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:46:51.264312 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 19:46:51.264470 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 19:46:51.267744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:46:51.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 19:46:51.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:51.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:51.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:51.273684 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:46:51.275140 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:46:51.275426 systemd[1]: Finished modprobe@loop.service. Feb 12 19:46:51.289127 systemd[1]: Found device dev-ttyS0.device. Feb 12 19:46:51.293317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:46:51.294412 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:46:51.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:51.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:51.296793 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:46:51.296899 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:46:51.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:51.443113 systemd[1]: Started systemd-userdbd.service. Feb 12 19:46:51.684675 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 19:46:51.693562 kernel: ACPI: button: Power Button [PWRF] Feb 12 19:46:51.739183 systemd-networkd[1085]: lo: Link UP Feb 12 19:46:51.739840 systemd-networkd[1085]: lo: Gained carrier Feb 12 19:46:51.740902 systemd-networkd[1085]: Enumeration completed Feb 12 19:46:51.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:51.741343 systemd[1]: Started systemd-networkd.service. Feb 12 19:46:51.742863 systemd-networkd[1085]: eth1: Configuring with /run/systemd/network/10-7e:5d:78:03:18:1a.network. Feb 12 19:46:51.745173 systemd-networkd[1085]: eth0: Configuring with /run/systemd/network/10-ea:1e:da:42:70:90.network. Feb 12 19:46:51.747207 systemd-networkd[1085]: eth1: Link UP Feb 12 19:46:51.747406 systemd-networkd[1085]: eth1: Gained carrier Feb 12 19:46:51.754801 systemd-networkd[1085]: eth0: Link UP Feb 12 19:46:51.754815 systemd-networkd[1085]: eth0: Gained carrier Feb 12 19:46:51.803014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 12 19:46:51.778000 audit[1073]: AVC avc: denied { confidentiality } for pid=1073 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:46:51.778000 audit[1073]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558e9058ff30 a1=32194 a2=7f5bd2f4cbc5 a3=5 items=108 ppid=1069 pid=1073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:46:51.778000 audit: CWD cwd="/" Feb 12 19:46:51.778000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=1 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=2 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.849121 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 19:46:51.778000 audit: PATH item=3 name=(null) inode=14326 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=4 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=5 name=(null) inode=14327 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=6 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=7 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=8 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=9 name=(null) inode=14329 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=10 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=11 name=(null) inode=14330 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=12 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 12 19:46:51.778000 audit: PATH item=13 name=(null) inode=14331 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=14 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=15 name=(null) inode=14332 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=16 name=(null) inode=14328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=17 name=(null) inode=14333 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=18 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=19 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=20 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=21 name=(null) inode=14335 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=22 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=23 name=(null) inode=14336 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=24 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=25 name=(null) inode=14337 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=26 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=27 name=(null) inode=14338 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=28 name=(null) inode=14334 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=29 name=(null) inode=14339 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=30 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=31 name=(null) inode=14340 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=32 name=(null) inode=14340 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=33 name=(null) inode=14341 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=34 name=(null) inode=14340 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=35 name=(null) inode=14342 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=36 name=(null) inode=14340 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=37 name=(null) inode=14343 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=38 name=(null) inode=14340 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=39 name=(null) inode=14344 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=40 name=(null) inode=14340 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=41 name=(null) inode=14345 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=42 name=(null) inode=14325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=43 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=44 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=45 name=(null) inode=14347 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=46 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=47 name=(null) inode=14348 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=48 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=49 name=(null) inode=14349 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=50 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=51 name=(null) inode=14350 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=52 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=53 name=(null) inode=14351 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=55 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=56 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=57 name=(null) inode=14353 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=58 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=59 name=(null) inode=14354 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=60 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=61 name=(null) inode=14355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=62 name=(null) inode=14355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=63 name=(null) inode=14356 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=64 name=(null) inode=14355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=65 name=(null) inode=14357 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=66 name=(null) inode=14355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=67 name=(null) inode=14358 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=68 name=(null) inode=14355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=69 name=(null) inode=14359 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=70 name=(null) inode=14355 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=71 name=(null) inode=14360 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=72 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=73 name=(null) inode=14361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=74 name=(null) inode=14361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=75 name=(null) inode=14362 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=76 name=(null) inode=14361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=77 name=(null) inode=14363 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=78 
name=(null) inode=14361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=79 name=(null) inode=14364 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=80 name=(null) inode=14361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=81 name=(null) inode=14365 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=82 name=(null) inode=14361 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=83 name=(null) inode=14366 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=84 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=85 name=(null) inode=14367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=86 name=(null) inode=14367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=87 name=(null) inode=14368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=88 name=(null) inode=14367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=89 name=(null) inode=14369 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=90 name=(null) inode=14367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=91 name=(null) inode=14370 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=92 name=(null) inode=14367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=93 name=(null) inode=14371 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=94 name=(null) inode=14367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=95 name=(null) inode=14372 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=96 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=97 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=98 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=99 name=(null) inode=14374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=100 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=101 name=(null) inode=14375 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=102 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=103 name=(null) inode=14376 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=104 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=105 name=(null) inode=14377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=106 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PATH item=107 name=(null) inode=14378 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:46:51.778000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:46:51.892083 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 19:46:51.910059 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:46:52.062080 kernel: EDAC MC: Ver: 3.0.0 Feb 12 19:46:52.087898 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:46:52.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:46:52.091427 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:46:52.123220 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:46:52.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.160251 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:46:52.161108 systemd[1]: Reached target cryptsetup.target. Feb 12 19:46:52.164183 systemd[1]: Starting lvm2-activation.service... Feb 12 19:46:52.179617 lvm[1114]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:46:52.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.216164 systemd[1]: Finished lvm2-activation.service. Feb 12 19:46:52.216918 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:46:52.220088 systemd[1]: Mounting media-configdrive.mount... Feb 12 19:46:52.220625 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:46:52.220697 systemd[1]: Reached target machines.target. Feb 12 19:46:52.223435 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:46:52.247385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:46:52.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.258070 kernel: ISO 9660 Extensions: RRIP_1991A Feb 12 19:46:52.260683 systemd[1]: Mounted media-configdrive.mount. Feb 12 19:46:52.261475 systemd[1]: Reached target local-fs.target. Feb 12 19:46:52.264166 systemd[1]: Starting ldconfig.service... Feb 12 19:46:52.267026 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:46:52.267157 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:46:52.271673 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:46:52.275772 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:46:52.276887 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:46:52.277204 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:46:52.279269 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:46:52.303937 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1124 (bootctl) Feb 12 19:46:52.306307 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:46:52.315578 systemd-tmpfiles[1126]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:46:52.324260 systemd-tmpfiles[1126]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:46:52.329215 systemd-tmpfiles[1126]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Feb 12 19:46:52.480308 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:46:52.483779 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:46:52.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.519287 systemd-fsck[1130]: fsck.fat 4.2 (2021-01-31) Feb 12 19:46:52.519287 systemd-fsck[1130]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 19:46:52.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.523838 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:46:52.527130 systemd[1]: Mounting boot.mount... Feb 12 19:46:52.554274 systemd[1]: Mounted boot.mount. Feb 12 19:46:52.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.600581 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:46:52.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.814960 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:46:52.817871 systemd[1]: Starting audit-rules.service... Feb 12 19:46:52.821570 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:46:52.824826 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:46:52.833276 systemd[1]: Starting systemd-resolved.service... Feb 12 19:46:52.841254 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:46:52.845557 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:46:52.847550 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:46:52.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.855297 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:46:52.881000 audit[1144]: SYSTEM_BOOT pid=1144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:46:52.887331 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:46:53.015077 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:46:53.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:46:53.025229 augenrules[1160]: No rules Feb 12 19:46:53.024000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:46:53.024000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe32fb1700 a2=420 a3=0 items=0 ppid=1138 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:46:53.024000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:46:53.026742 systemd[1]: Finished audit-rules.service. Feb 12 19:46:53.036637 systemd-networkd[1085]: eth0: Gained IPv6LL Feb 12 19:46:53.072735 systemd-resolved[1141]: Positive Trust Anchors: Feb 12 19:46:53.073412 systemd-resolved[1141]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:46:53.073544 systemd-resolved[1141]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:46:53.100075 systemd-resolved[1141]: Using system hostname 'ci-3510.3.2-3-7482959a87'. Feb 12 19:46:53.106725 systemd[1]: Started systemd-resolved.service. Feb 12 19:46:53.107719 systemd[1]: Reached target network.target. Feb 12 19:46:53.108264 systemd[1]: Reached target nss-lookup.target. Feb 12 19:46:53.167127 systemd-networkd[1085]: eth1: Gained IPv6LL Feb 12 19:46:53.181604 ldconfig[1123]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:46:53.215844 systemd[1]: Finished ldconfig.service. Feb 12 19:46:53.219711 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:46:53.220856 systemd[1]: Reached target time-set.target. Feb 12 19:46:53.224805 systemd[1]: Starting systemd-update-done.service... Feb 12 19:46:53.243386 systemd[1]: Finished systemd-update-done.service. Feb 12 19:46:53.244280 systemd[1]: Reached target sysinit.target. Feb 12 19:46:53.244983 systemd[1]: Started motdgen.path. Feb 12 19:46:53.245527 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:46:53.246352 systemd[1]: Started logrotate.timer. Feb 12 19:46:53.247073 systemd[1]: Started mdadm.timer. Feb 12 19:46:53.247645 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:46:53.248330 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:46:53.248390 systemd[1]: Reached target paths.target. Feb 12 19:46:53.249090 systemd[1]: Reached target timers.target. Feb 12 19:46:53.250193 systemd[1]: Listening on dbus.socket. Feb 12 19:46:53.253588 systemd[1]: Starting docker.socket... Feb 12 19:46:53.263373 systemd[1]: Listening on sshd.socket. Feb 12 19:46:53.264129 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:46:53.265230 systemd[1]: Listening on docker.socket. 
Feb 12 19:46:53.266220 systemd[1]: Reached target sockets.target. Feb 12 19:46:53.266737 systemd[1]: Reached target basic.target. Feb 12 19:46:53.267536 systemd[1]: System is tainted: cgroupsv1 Feb 12 19:46:53.267601 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:46:53.267635 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:46:53.270194 systemd[1]: Starting containerd.service... Feb 12 19:46:53.273557 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 19:46:53.276299 systemd-timesyncd[1142]: Contacted time server 168.235.86.33:123 (0.flatcar.pool.ntp.org). Feb 12 19:46:53.276398 systemd-timesyncd[1142]: Initial clock synchronization to Mon 2024-02-12 19:46:53.629258 UTC. Feb 12 19:46:53.276967 systemd[1]: Starting dbus.service... Feb 12 19:46:53.284791 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:46:53.297428 systemd[1]: Starting extend-filesystems.service... Feb 12 19:46:53.298197 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:46:53.301609 systemd[1]: Starting motdgen.service... Feb 12 19:46:53.308752 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:46:53.315233 jq[1179]: false Feb 12 19:46:53.315294 systemd[1]: Starting prepare-critools.service... Feb 12 19:46:53.319771 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:46:53.325894 systemd[1]: Starting sshd-keygen.service... Feb 12 19:46:53.338194 systemd[1]: Starting systemd-logind.service... Feb 12 19:46:53.338993 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:46:53.339178 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:46:53.409257 jq[1193]: true Feb 12 19:46:53.342580 systemd[1]: Starting update-engine.service... Feb 12 19:46:53.349100 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:46:53.356525 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:46:53.410014 tar[1195]: ./ Feb 12 19:46:53.410014 tar[1195]: ./macvlan Feb 12 19:46:53.356986 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:46:53.369154 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:46:53.369533 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:46:53.423929 tar[1196]: crictl Feb 12 19:46:53.427577 jq[1211]: true Feb 12 19:46:53.482005 extend-filesystems[1180]: Found vda Feb 12 19:46:53.485202 dbus-daemon[1175]: [system] SELinux support is enabled Feb 12 19:46:53.485538 systemd[1]: Started dbus.service. Feb 12 19:46:53.489347 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:46:53.489387 systemd[1]: Reached target system-config.target. Feb 12 19:46:53.489849 extend-filesystems[1180]: Found vda1 Feb 12 19:46:53.490081 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:46:53.492581 systemd[1]: Starting user-configdrive.service... 
Feb 12 19:46:53.492951 extend-filesystems[1180]: Found vda2 Feb 12 19:46:53.494314 extend-filesystems[1180]: Found vda3 Feb 12 19:46:53.494314 extend-filesystems[1180]: Found usr Feb 12 19:46:53.494314 extend-filesystems[1180]: Found vda4 Feb 12 19:46:53.499755 extend-filesystems[1180]: Found vda6 Feb 12 19:46:53.499755 extend-filesystems[1180]: Found vda7 Feb 12 19:46:53.499755 extend-filesystems[1180]: Found vda9 Feb 12 19:46:53.538264 extend-filesystems[1180]: Checking size of /dev/vda9 Feb 12 19:46:53.530564 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:46:53.530950 systemd[1]: Finished motdgen.service. Feb 12 19:46:53.599824 bash[1242]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:46:53.600706 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:46:53.614599 coreos-cloudinit[1220]: 2024/02/12 19:46:53 Checking availability of "cloud-drive" Feb 12 19:46:53.615318 coreos-cloudinit[1220]: 2024/02/12 19:46:53 Fetching user-data from datasource of type "cloud-drive" Feb 12 19:46:53.615474 coreos-cloudinit[1220]: 2024/02/12 19:46:53 Attempting to read from "/media/configdrive/openstack/latest/user_data" Feb 12 19:46:53.621620 coreos-cloudinit[1220]: 2024/02/12 19:46:53 Fetching meta-data from datasource of type "cloud-drive" Feb 12 19:46:53.622822 coreos-cloudinit[1220]: 2024/02/12 19:46:53 Attempting to read from "/media/configdrive/openstack/latest/meta_data.json" Feb 12 19:46:53.645760 extend-filesystems[1180]: Resized partition /dev/vda9 Feb 12 19:46:53.673438 extend-filesystems[1248]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 19:46:53.678853 coreos-cloudinit[1220]: Detected an Ignition config. Exiting... Feb 12 19:46:53.681106 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 12 19:46:53.688120 systemd[1]: Finished user-configdrive.service. Feb 12 19:46:53.688828 systemd[1]: Reached target user-config.target. Feb 12 19:46:53.706814 update_engine[1192]: I0212 19:46:53.705420 1192 main.cc:92] Flatcar Update Engine starting Feb 12 19:46:53.717827 systemd[1]: Started update-engine.service. Feb 12 19:46:53.718407 update_engine[1192]: I0212 19:46:53.718362 1192 update_check_scheduler.cc:74] Next update check in 11m39s Feb 12 19:46:53.721040 systemd[1]: Started locksmithd.service. Feb 12 19:46:53.722137 tar[1195]: ./static Feb 12 19:46:53.736858 env[1209]: time="2024-02-12T19:46:53.736753391Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:46:53.813059 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 12 19:46:55.530400 extend-filesystems[1248]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 19:46:55.530400 extend-filesystems[1248]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 12 19:46:55.530400 extend-filesystems[1248]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 12 19:46:55.547329 extend-filesystems[1180]: Resized filesystem in /dev/vda9 Feb 12 19:46:55.547329 extend-filesystems[1180]: Found vdb Feb 12 19:46:55.531724 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:46:55.532131 systemd[1]: Finished extend-filesystems.service. 
Feb 12 19:46:55.601160 coreos-metadata[1174]: Feb 12 19:46:55.600 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:46:55.602665 systemd-logind[1191]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 19:46:55.602698 systemd-logind[1191]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:46:55.603081 systemd-logind[1191]: New seat seat0. Feb 12 19:46:55.623725 systemd[1]: Started systemd-logind.service. Feb 12 19:46:55.628152 coreos-metadata[1174]: Feb 12 19:46:55.625 INFO Fetch successful Feb 12 19:46:55.629093 tar[1195]: ./vlan Feb 12 19:46:55.633510 env[1209]: time="2024-02-12T19:46:55.632983745Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:46:55.633510 env[1209]: time="2024-02-12T19:46:55.633250346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:46:55.640185 unknown[1174]: wrote ssh authorized keys file for user: core Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.660012551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.660104121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.660637982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.660743883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.660766534Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.660795546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.660958324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.661608116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.662091390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:46:55.662818 env[1209]: time="2024-02-12T19:46:55.662141170Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 12 19:46:55.666616 env[1209]: time="2024-02-12T19:46:55.662266015Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:46:55.666616 env[1209]: time="2024-02-12T19:46:55.662305804Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:46:55.673343 update-ssh-keys[1256]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:46:55.674395 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 19:46:55.682218 env[1209]: time="2024-02-12T19:46:55.682136594Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:46:55.682554 env[1209]: time="2024-02-12T19:46:55.682503226Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:46:55.682800 env[1209]: time="2024-02-12T19:46:55.682773400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:46:55.683061 env[1209]: time="2024-02-12T19:46:55.683013968Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.683252 env[1209]: time="2024-02-12T19:46:55.683226905Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.683389 env[1209]: time="2024-02-12T19:46:55.683366912Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.683510 env[1209]: time="2024-02-12T19:46:55.683490255Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.683628 env[1209]: time="2024-02-12T19:46:55.683608844Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.683744 env[1209]: time="2024-02-12T19:46:55.683725001Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.683903 env[1209]: time="2024-02-12T19:46:55.683880762Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.684078 env[1209]: time="2024-02-12T19:46:55.684054801Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.684233 env[1209]: time="2024-02-12T19:46:55.684209429Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:46:55.684621 env[1209]: time="2024-02-12T19:46:55.684594334Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:46:55.684939 env[1209]: time="2024-02-12T19:46:55.684915546Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:46:55.688005 env[1209]: time="2024-02-12T19:46:55.687889551Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:46:55.695815 env[1209]: time="2024-02-12T19:46:55.695757038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.695815 env[1209]: time="2024-02-12T19:46:55.695811978Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.695969094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.695998014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.696018231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.696039019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.696083700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.696108005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.696128028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.696149140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696187 env[1209]: time="2024-02-12T19:46:55.696179274Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:46:55.696577 env[1209]: time="2024-02-12T19:46:55.696430215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696577 env[1209]: time="2024-02-12T19:46:55.696468455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696577 env[1209]: time="2024-02-12T19:46:55.696491848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:46:55.696577 env[1209]: time="2024-02-12T19:46:55.696514416Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:46:55.696577 env[1209]: time="2024-02-12T19:46:55.696539025Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:46:55.696577 env[1209]: time="2024-02-12T19:46:55.696564375Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:46:55.696859 env[1209]: time="2024-02-12T19:46:55.696616499Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:46:55.696859 env[1209]: time="2024-02-12T19:46:55.696684141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 12 19:46:55.698047 env[1209]: time="2024-02-12T19:46:55.697020438Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:46:55.698047 env[1209]: time="2024-02-12T19:46:55.697152275Z" level=info msg="Connect containerd service" Feb 12 19:46:55.698047 env[1209]: time="2024-02-12T19:46:55.697240645Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:46:55.703349 env[1209]: time="2024-02-12T19:46:55.698152933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:46:55.703349 env[1209]: time="2024-02-12T19:46:55.698626989Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 19:46:55.703349 env[1209]: time="2024-02-12T19:46:55.698689341Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:46:55.703349 env[1209]: time="2024-02-12T19:46:55.698789273Z" level=info msg="containerd successfully booted in 1.970862s" Feb 12 19:46:55.699051 systemd[1]: Started containerd.service. 
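The CRI plugin above is pointed at NetworkPluginBinDir:/opt/cni/bin and NetworkPluginConfDir:/etc/cni/net.d, and its only complaint is "no network config found in /etc/cni/net.d". A minimal sketch of a conflist that would satisfy that check, assuming the bridge, host-local and portmap plugins being unpacked just below are the ones to use; the network name, bridge name, subnet and file name are illustrative, not taken from this host:

import json
import pathlib

# Hypothetical sketch: a minimal CNI conflist so the CRI plugin's config sync
# finds something in /etc/cni/net.d. Network name, bridge name, subnet and
# file name are illustrative only.
conflist = {
    "cniVersion": "0.4.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",        # plugin binaries assumed to land in /opt/cni/bin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

conf_dir = pathlib.Path("/etc/cni/net.d")
conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-containerd-net.conflist").write_text(json.dumps(conflist, indent=2))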
Feb 12 19:46:55.705971 env[1209]: time="2024-02-12T19:46:55.705840176Z" level=info msg="Start subscribing containerd event" Feb 12 19:46:55.712490 env[1209]: time="2024-02-12T19:46:55.712022473Z" level=info msg="Start recovering state" Feb 12 19:46:55.713146 env[1209]: time="2024-02-12T19:46:55.713111240Z" level=info msg="Start event monitor" Feb 12 19:46:55.713563 env[1209]: time="2024-02-12T19:46:55.713528909Z" level=info msg="Start snapshots syncer" Feb 12 19:46:55.714100 env[1209]: time="2024-02-12T19:46:55.714043255Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:46:55.714243 env[1209]: time="2024-02-12T19:46:55.714214043Z" level=info msg="Start streaming server" Feb 12 19:46:55.756054 tar[1195]: ./portmap Feb 12 19:46:55.870949 tar[1195]: ./host-local Feb 12 19:46:55.950609 tar[1195]: ./vrf Feb 12 19:46:56.015709 tar[1195]: ./bridge Feb 12 19:46:56.089355 tar[1195]: ./tuning Feb 12 19:46:56.154503 tar[1195]: ./firewall Feb 12 19:46:56.231780 tar[1195]: ./host-device Feb 12 19:46:56.299129 tar[1195]: ./sbr Feb 12 19:46:56.378182 tar[1195]: ./loopback Feb 12 19:46:56.437945 tar[1195]: ./dhcp Feb 12 19:46:56.639530 tar[1195]: ./ptp Feb 12 19:46:56.740146 tar[1195]: ./ipvlan Feb 12 19:46:56.753682 systemd[1]: Finished prepare-critools.service. Feb 12 19:46:56.814142 tar[1195]: ./bandwidth Feb 12 19:46:56.896962 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:46:57.046634 locksmithd[1250]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:46:57.331943 sshd_keygen[1215]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:46:57.369070 systemd[1]: Finished sshd-keygen.service. Feb 12 19:46:57.373254 systemd[1]: Starting issuegen.service... Feb 12 19:46:57.395525 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:46:57.395939 systemd[1]: Finished issuegen.service. Feb 12 19:46:57.401856 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:46:57.424465 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:46:57.436171 systemd[1]: Started getty@tty1.service. Feb 12 19:46:57.439720 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:46:57.440929 systemd[1]: Reached target getty.target. Feb 12 19:46:57.441631 systemd[1]: Reached target multi-user.target. Feb 12 19:46:57.448857 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:46:57.467707 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:46:57.468681 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:46:57.493336 systemd[1]: Startup finished in 10.607s (kernel) + 13.482s (userspace) = 24.090s. Feb 12 19:47:02.662589 systemd[1]: Created slice system-sshd.slice. Feb 12 19:47:02.670152 systemd[1]: Started sshd@0-64.23.171.188:22-139.178.68.195:60160.service. Feb 12 19:47:02.889931 sshd[1293]: Accepted publickey for core from 139.178.68.195 port 60160 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:02.893867 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:02.931296 systemd[1]: Created slice user-500.slice. Feb 12 19:47:02.933190 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:47:02.957588 systemd-logind[1191]: New session 1 of user core. Feb 12 19:47:02.989984 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:47:03.009247 systemd[1]: Starting user@500.service... 
Feb 12 19:47:03.021903 (systemd)[1298]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:03.317380 systemd[1298]: Queued start job for default target default.target. Feb 12 19:47:03.319644 systemd[1298]: Reached target paths.target. Feb 12 19:47:03.319937 systemd[1298]: Reached target sockets.target. Feb 12 19:47:03.320124 systemd[1298]: Reached target timers.target. Feb 12 19:47:03.320335 systemd[1298]: Reached target basic.target. Feb 12 19:47:03.320543 systemd[1298]: Reached target default.target. Feb 12 19:47:03.320711 systemd[1298]: Startup finished in 281ms. Feb 12 19:47:03.321094 systemd[1]: Started user@500.service. Feb 12 19:47:03.323058 systemd[1]: Started session-1.scope. Feb 12 19:47:03.403173 systemd[1]: Started sshd@1-64.23.171.188:22-139.178.68.195:60172.service. Feb 12 19:47:03.501118 sshd[1307]: Accepted publickey for core from 139.178.68.195 port 60172 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:03.504764 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:03.542837 systemd[1]: Started session-2.scope. Feb 12 19:47:03.543442 systemd-logind[1191]: New session 2 of user core. Feb 12 19:47:03.632316 sshd[1307]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:03.641789 systemd[1]: Started sshd@2-64.23.171.188:22-139.178.68.195:60180.service. Feb 12 19:47:03.644627 systemd[1]: sshd@1-64.23.171.188:22-139.178.68.195:60172.service: Deactivated successfully. Feb 12 19:47:03.647486 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 19:47:03.652454 systemd-logind[1191]: Session 2 logged out. Waiting for processes to exit. Feb 12 19:47:03.655156 systemd-logind[1191]: Removed session 2. Feb 12 19:47:03.723307 sshd[1312]: Accepted publickey for core from 139.178.68.195 port 60180 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:03.726549 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:03.742519 systemd-logind[1191]: New session 3 of user core. Feb 12 19:47:03.743821 systemd[1]: Started session-3.scope. Feb 12 19:47:03.818185 sshd[1312]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:03.823447 systemd[1]: Started sshd@3-64.23.171.188:22-139.178.68.195:60190.service. Feb 12 19:47:03.824666 systemd-logind[1191]: Session 3 logged out. Waiting for processes to exit. Feb 12 19:47:03.827065 systemd[1]: sshd@2-64.23.171.188:22-139.178.68.195:60180.service: Deactivated successfully. Feb 12 19:47:03.828455 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 19:47:03.830809 systemd-logind[1191]: Removed session 3. Feb 12 19:47:03.895693 sshd[1319]: Accepted publickey for core from 139.178.68.195 port 60190 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:03.898523 sshd[1319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:03.913124 systemd-logind[1191]: New session 4 of user core. Feb 12 19:47:03.913820 systemd[1]: Started session-4.scope. Feb 12 19:47:04.012492 sshd[1319]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:04.018981 systemd[1]: Started sshd@4-64.23.171.188:22-139.178.68.195:60198.service. Feb 12 19:47:04.026579 systemd[1]: sshd@3-64.23.171.188:22-139.178.68.195:60190.service: Deactivated successfully. Feb 12 19:47:04.028443 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:47:04.031555 systemd-logind[1191]: Session 4 logged out. 
Waiting for processes to exit. Feb 12 19:47:04.036677 systemd-logind[1191]: Removed session 4. Feb 12 19:47:04.100236 sshd[1326]: Accepted publickey for core from 139.178.68.195 port 60198 ssh2: RSA SHA256:LDsRqpNYdTYD100G09SwfYn1R0SNt/l+VxRWb4eNCNc Feb 12 19:47:04.103688 sshd[1326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:47:04.114643 systemd[1]: Started session-5.scope. Feb 12 19:47:04.114989 systemd-logind[1191]: New session 5 of user core. Feb 12 19:47:04.215282 sudo[1332]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:47:04.216345 sudo[1332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:47:04.903808 systemd[1]: Reloading. Feb 12 19:47:05.165155 /usr/lib/systemd/system-generators/torcx-generator[1361]: time="2024-02-12T19:47:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:47:05.166759 /usr/lib/systemd/system-generators/torcx-generator[1361]: time="2024-02-12T19:47:05Z" level=info msg="torcx already run" Feb 12 19:47:05.434805 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:47:05.434855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:47:05.480648 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:47:05.631215 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:47:05.644665 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:47:05.646117 systemd[1]: Reached target network-online.target. Feb 12 19:47:05.650515 systemd[1]: Started kubelet.service. Feb 12 19:47:05.675600 systemd[1]: Starting coreos-metadata.service... Feb 12 19:47:05.766766 coreos-metadata[1423]: Feb 12 19:47:05.766 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 12 19:47:05.784968 coreos-metadata[1423]: Feb 12 19:47:05.784 INFO Fetch successful Feb 12 19:47:05.814088 kubelet[1415]: E0212 19:47:05.811188 1415 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 19:47:05.814469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:47:05.814744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:47:05.823466 systemd[1]: Finished coreos-metadata.service. Feb 12 19:47:06.776122 systemd[1]: Stopped kubelet.service. Feb 12 19:47:06.838949 systemd[1]: Reloading. 
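The first kubelet start above fails validation with "the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set". One hypothetical way to supply it is a systemd drop-in pointing at the containerd socket logged earlier (/run/containerd/containerd.sock); the drop-in name and the KUBELET_EXTRA_ARGS variable are assumptions about this image's kubelet unit, not facts from the log:

import pathlib

# Hypothetical sketch: a kubelet.service drop-in passing the containerd socket
# via --container-runtime-endpoint. Whether this unit honors KUBELET_EXTRA_ARGS
# is an assumption about the image, not something the log shows.
dropin_dir = pathlib.Path("/etc/systemd/system/kubelet.service.d")
dropin_dir.mkdir(parents=True, exist_ok=True)
(dropin_dir / "20-container-runtime.conf").write_text(
    "[Service]\n"
    'Environment="KUBELET_EXTRA_ARGS='
    '--container-runtime-endpoint=unix:///run/containerd/containerd.sock"\n'
)
# Then: systemctl daemon-reload && systemctl restart kubelet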
Feb 12 19:47:07.066564 /usr/lib/systemd/system-generators/torcx-generator[1484]: time="2024-02-12T19:47:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:47:07.067697 /usr/lib/systemd/system-generators/torcx-generator[1484]: time="2024-02-12T19:47:07Z" level=info msg="torcx already run" Feb 12 19:47:07.296767 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:47:07.312320 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:47:07.356941 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:47:07.610763 systemd[1]: Started kubelet.service. Feb 12 19:47:07.785054 kubelet[1536]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:47:07.785653 kubelet[1536]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:47:07.785973 kubelet[1536]: I0212 19:47:07.785904 1536 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:47:07.788977 kubelet[1536]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 19:47:07.789221 kubelet[1536]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:47:09.143486 kubelet[1536]: I0212 19:47:09.143438 1536 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 19:47:09.144178 kubelet[1536]: I0212 19:47:09.144131 1536 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:47:09.147936 kubelet[1536]: I0212 19:47:09.144510 1536 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 19:47:09.170399 kubelet[1536]: I0212 19:47:09.153012 1536 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:47:09.172327 kubelet[1536]: I0212 19:47:09.172285 1536 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:47:09.172806 kubelet[1536]: I0212 19:47:09.172773 1536 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 19:47:09.173276 kubelet[1536]: I0212 19:47:09.173245 1536 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 19:47:09.173418 kubelet[1536]: I0212 19:47:09.173401 1536 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 19:47:09.173707 kubelet[1536]: I0212 19:47:09.173684 1536 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:47:09.174349 kubelet[1536]: I0212 19:47:09.170324 1536 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:47:09.193496 kubelet[1536]: I0212 19:47:09.193449 1536 kubelet.go:398] "Attempting to sync node with API server" Feb 12 19:47:09.193766 kubelet[1536]: I0212 19:47:09.193742 1536 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:47:09.193931 kubelet[1536]: I0212 19:47:09.193910 1536 kubelet.go:297] "Adding apiserver pod source" Feb 12 19:47:09.194190 kubelet[1536]: I0212 19:47:09.194159 1536 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:47:09.195450 kubelet[1536]: E0212 19:47:09.195414 1536 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:09.195827 kubelet[1536]: E0212 19:47:09.195786 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:09.197082 kubelet[1536]: I0212 19:47:09.196831 1536 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:47:09.198984 kubelet[1536]: W0212 19:47:09.197793 1536 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 12 19:47:09.198984 kubelet[1536]: I0212 19:47:09.198659 1536 server.go:1186] "Started kubelet" Feb 12 19:47:09.199405 kubelet[1536]: I0212 19:47:09.199375 1536 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:47:09.200807 kubelet[1536]: I0212 19:47:09.200768 1536 server.go:451] "Adding debug handlers to kubelet server" Feb 12 19:47:09.202875 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 19:47:09.203311 kubelet[1536]: I0212 19:47:09.203275 1536 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:47:09.212668 kubelet[1536]: E0212 19:47:09.211678 1536 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:47:09.212668 kubelet[1536]: E0212 19:47:09.211753 1536 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:47:09.213328 kubelet[1536]: I0212 19:47:09.213292 1536 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 19:47:09.214072 kubelet[1536]: I0212 19:47:09.214005 1536 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:47:09.247785 kubelet[1536]: W0212 19:47:09.247705 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:47:09.247785 kubelet[1536]: E0212 19:47:09.247789 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:47:09.248095 kubelet[1536]: W0212 19:47:09.247862 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "64.23.171.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:47:09.248095 kubelet[1536]: E0212 19:47:09.247879 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "64.23.171.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:47:09.248095 kubelet[1536]: W0212 19:47:09.247919 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:47:09.248095 kubelet[1536]: E0212 19:47:09.247931 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:47:09.248308 kubelet[1536]: E0212 19:47:09.247994 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f2f6fc6bb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 198616251, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 198616251, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:09.250886 kubelet[1536]: E0212 19:47:09.249868 1536 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "64.23.171.188" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:47:09.257344 kubelet[1536]: E0212 19:47:09.254332 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3037962b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 211711019, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 211711019, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
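Every API call above is made as system:anonymous and rejected, which is consistent with the earlier "Client rotation is on, will bootstrap in background" line: the kubelet has no client certificate yet, so listing nodes/services/csidrivers, posting events, and registering the node all return 403 until TLS bootstrap completes. A hypothetical diagnostic sketch that only checks the conventional credential locations (the paths are the usual defaults, not read from this host):

import pathlib

# Hypothetical diagnostic sketch: repeated "system:anonymous ... forbidden"
# errors usually mean the kubelet has no client credentials yet. These are the
# conventional locations only; the actual paths on this host are assumptions.
for path in (
    "/etc/kubernetes/bootstrap-kubelet.conf",           # bootstrap kubeconfig
    "/etc/kubernetes/kubelet.conf",                      # kubeconfig after bootstrap
    "/var/lib/kubelet/pki/kubelet-client-current.pem",   # rotated client certificate
):
    state = "present" if pathlib.Path(path).exists() else "missing"
    print(f"{path}: {state}")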
Feb 12 19:47:09.315173 kubelet[1536]: I0212 19:47:09.315136 1536 kubelet_node_status.go:70] "Attempting to register node" node="64.23.171.188" Feb 12 19:47:09.317546 kubelet[1536]: E0212 19:47:09.317503 1536 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="64.23.171.188" Feb 12 19:47:09.318250 kubelet[1536]: E0212 19:47:09.318096 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660daa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 64.23.171.188 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315078819, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315078819, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:09.325531 kubelet[1536]: E0212 19:47:09.325340 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660fd88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 64.23.171.188 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315087752, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315087752, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:09.328586 kubelet[1536]: E0212 19:47:09.328425 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f36611086", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 64.23.171.188 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315092614, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315092614, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:09.350432 kubelet[1536]: I0212 19:47:09.349056 1536 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:47:09.350641 kubelet[1536]: I0212 19:47:09.350494 1536 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:47:09.350641 kubelet[1536]: I0212 19:47:09.350539 1536 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:47:09.351820 kubelet[1536]: E0212 19:47:09.351633 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660daa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 64.23.171.188 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315078819, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 346672772, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660daa3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:09.373462 kubelet[1536]: E0212 19:47:09.373317 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660fd88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 64.23.171.188 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315087752, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 346684438, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660fd88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:09.386023 kubelet[1536]: E0212 19:47:09.385869 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f36611086", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 64.23.171.188 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315092614, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 346695949, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f36611086" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:09.403065 kubelet[1536]: I0212 19:47:09.400146 1536 policy_none.go:49] "None policy: Start" Feb 12 19:47:09.404959 kubelet[1536]: I0212 19:47:09.404921 1536 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:47:09.405354 kubelet[1536]: I0212 19:47:09.405330 1536 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:47:09.461987 kubelet[1536]: E0212 19:47:09.461943 1536 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "64.23.171.188" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:47:09.497637 kubelet[1536]: I0212 19:47:09.497584 1536 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:47:09.498595 kubelet[1536]: I0212 19:47:09.498556 1536 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:47:09.503002 kubelet[1536]: E0212 19:47:09.502947 1536 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"64.23.171.188\" not found" Feb 12 19:47:09.504000 kubelet[1536]: E0212 19:47:09.503867 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f4178eb16", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 501205270, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 501205270, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:09.520222 kubelet[1536]: I0212 19:47:09.520187 1536 kubelet_node_status.go:70] "Attempting to register node" node="64.23.171.188" Feb 12 19:47:09.523453 kubelet[1536]: E0212 19:47:09.523136 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660daa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 64.23.171.188 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315078819, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 520103162, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660daa3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:09.524306 kubelet[1536]: E0212 19:47:09.524278 1536 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="64.23.171.188" Feb 12 19:47:09.527877 kubelet[1536]: E0212 19:47:09.527750 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660fd88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 64.23.171.188 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315087752, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 520112507, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660fd88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:09.627804 kubelet[1536]: E0212 19:47:09.618532 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f36611086", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 64.23.171.188 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315092614, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 520119581, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f36611086" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:09.810237 kubelet[1536]: I0212 19:47:09.810075 1536 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 19:47:09.873868 kubelet[1536]: E0212 19:47:09.873812 1536 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "64.23.171.188" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:47:09.918942 kubelet[1536]: I0212 19:47:09.918898 1536 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 19:47:09.919426 kubelet[1536]: I0212 19:47:09.919400 1536 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 19:47:09.919583 kubelet[1536]: I0212 19:47:09.919568 1536 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 19:47:09.919762 kubelet[1536]: E0212 19:47:09.919749 1536 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 19:47:09.922437 kubelet[1536]: W0212 19:47:09.922396 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:47:09.922961 kubelet[1536]: E0212 19:47:09.922942 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:47:09.926435 kubelet[1536]: I0212 19:47:09.925771 1536 kubelet_node_status.go:70] "Attempting to register node" node="64.23.171.188" Feb 12 19:47:09.929789 kubelet[1536]: E0212 19:47:09.929711 1536 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="64.23.171.188" Feb 12 19:47:09.930715 kubelet[1536]: E0212 19:47:09.930506 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660daa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 64.23.171.188 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315078819, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 925712099, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660daa3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:10.011171 kubelet[1536]: E0212 19:47:10.011020 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660fd88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 64.23.171.188 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315087752, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 925721658, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660fd88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:10.166575 kubelet[1536]: W0212 19:47:10.166401 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:47:10.167604 kubelet[1536]: E0212 19:47:10.167568 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:47:10.196730 kubelet[1536]: E0212 19:47:10.196573 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:10.210360 kubelet[1536]: E0212 19:47:10.210123 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f36611086", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 64.23.171.188 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315092614, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 925726715, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f36611086" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:10.305332 kubelet[1536]: W0212 19:47:10.305235 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:47:10.306571 kubelet[1536]: E0212 19:47:10.306470 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:47:10.622113 kubelet[1536]: W0212 19:47:10.621913 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "64.23.171.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:47:10.622389 kubelet[1536]: E0212 19:47:10.622358 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "64.23.171.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:47:10.677967 kubelet[1536]: E0212 19:47:10.677894 1536 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "64.23.171.188" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:47:10.731841 kubelet[1536]: I0212 19:47:10.731790 1536 kubelet_node_status.go:70] "Attempting to register node" node="64.23.171.188" Feb 12 19:47:10.734319 kubelet[1536]: E0212 19:47:10.734274 1536 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="64.23.171.188" Feb 12 19:47:10.734542 kubelet[1536]: E0212 19:47:10.734361 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660daa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 64.23.171.188 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315078819, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 10, 731732594, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 
'events "64.23.171.188.17b3353f3660daa3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:10.736355 kubelet[1536]: E0212 19:47:10.736184 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660fd88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 64.23.171.188 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315087752, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 10, 731742427, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660fd88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:10.769586 kubelet[1536]: W0212 19:47:10.769482 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:47:10.769944 kubelet[1536]: E0212 19:47:10.769911 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:47:10.810809 kubelet[1536]: E0212 19:47:10.810651 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f36611086", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 64.23.171.188 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315092614, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 10, 731747793, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f36611086" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:11.196876 kubelet[1536]: E0212 19:47:11.196819 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:12.198610 kubelet[1536]: E0212 19:47:12.198550 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:12.235786 kubelet[1536]: W0212 19:47:12.235679 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:47:12.235786 kubelet[1536]: E0212 19:47:12.235759 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:47:12.281247 kubelet[1536]: E0212 19:47:12.281190 1536 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "64.23.171.188" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:47:12.335995 kubelet[1536]: I0212 19:47:12.335932 1536 kubelet_node_status.go:70] "Attempting to register node" node="64.23.171.188" Feb 12 19:47:12.340840 kubelet[1536]: E0212 19:47:12.340787 1536 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="64.23.171.188" Feb 12 19:47:12.343395 kubelet[1536]: E0212 19:47:12.340831 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660daa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 64.23.171.188 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315078819, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 12, 335869169, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660daa3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:12.350070 kubelet[1536]: E0212 19:47:12.349814 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660fd88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 64.23.171.188 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315087752, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 12, 335883125, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660fd88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:12.352742 kubelet[1536]: E0212 19:47:12.352471 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f36611086", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 64.23.171.188 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315092614, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 12, 335888438, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f36611086" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:12.405602 kubelet[1536]: W0212 19:47:12.405492 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "64.23.171.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:47:12.405788 kubelet[1536]: E0212 19:47:12.405629 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "64.23.171.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:47:12.800956 kubelet[1536]: W0212 19:47:12.793389 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:47:12.800956 kubelet[1536]: E0212 19:47:12.800823 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:47:12.928652 kubelet[1536]: W0212 19:47:12.928607 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:47:12.928652 kubelet[1536]: E0212 19:47:12.928652 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:47:13.199577 kubelet[1536]: E0212 19:47:13.199332 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:14.199963 kubelet[1536]: E0212 19:47:14.199700 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:15.200452 kubelet[1536]: E0212 19:47:15.200334 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:15.484215 kubelet[1536]: E0212 19:47:15.484068 1536 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "64.23.171.188" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 19:47:15.544821 kubelet[1536]: I0212 19:47:15.544766 1536 kubelet_node_status.go:70] "Attempting to register node" node="64.23.171.188" Feb 12 19:47:15.547193 kubelet[1536]: E0212 19:47:15.547153 1536 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="64.23.171.188" Feb 12 19:47:15.547906 kubelet[1536]: E0212 19:47:15.547666 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660daa3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 64.23.171.188 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315078819, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 15, 544664641, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660daa3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:15.549953 kubelet[1536]: E0212 19:47:15.549807 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f3660fd88", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 64.23.171.188 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315087752, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 15, 544680055, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f3660fd88" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 19:47:15.552219 kubelet[1536]: E0212 19:47:15.552091 1536 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"64.23.171.188.17b3353f36611086", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"64.23.171.188", UID:"64.23.171.188", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 64.23.171.188 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"64.23.171.188"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 47, 9, 315092614, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 47, 15, 544727730, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "64.23.171.188.17b3353f36611086" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 19:47:16.201732 kubelet[1536]: E0212 19:47:16.201651 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:17.208523 kubelet[1536]: E0212 19:47:17.208404 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:17.597554 kubelet[1536]: W0212 19:47:17.597395 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:47:17.597785 kubelet[1536]: E0212 19:47:17.597768 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 19:47:17.821098 kubelet[1536]: W0212 19:47:17.820875 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:47:17.821461 kubelet[1536]: E0212 19:47:17.821419 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 19:47:17.927706 kubelet[1536]: W0212 19:47:17.927533 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:47:17.927706 kubelet[1536]: E0212 19:47:17.927592 1536 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 19:47:17.957206 kubelet[1536]: W0212 19:47:17.957115 1536 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "64.23.171.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:47:17.957520 kubelet[1536]: E0212 19:47:17.957498 1536 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "64.23.171.188" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 19:47:18.209191 kubelet[1536]: E0212 19:47:18.208977 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:19.171126 kubelet[1536]: I0212 19:47:19.171002 1536 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 19:47:19.209580 kubelet[1536]: E0212 19:47:19.209501 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:19.503641 kubelet[1536]: E0212 19:47:19.503446 1536 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"64.23.171.188\" not found" Feb 12 19:47:19.629020 kubelet[1536]: E0212 19:47:19.628962 1536 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "64.23.171.188" not found Feb 12 19:47:20.211807 kubelet[1536]: E0212 19:47:20.211748 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:20.653457 kubelet[1536]: E0212 19:47:20.652322 1536 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "64.23.171.188" not found Feb 12 19:47:21.214557 kubelet[1536]: E0212 19:47:21.214494 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:21.895941 kubelet[1536]: E0212 19:47:21.895858 1536 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"64.23.171.188\" not found" node="64.23.171.188" Feb 12 19:47:21.949891 kubelet[1536]: I0212 19:47:21.949851 1536 kubelet_node_status.go:70] "Attempting to register node" node="64.23.171.188" Feb 12 19:47:22.056476 kubelet[1536]: I0212 19:47:22.056399 1536 kubelet_node_status.go:73] "Successfully registered node" node="64.23.171.188" Feb 12 19:47:22.074518 kubelet[1536]: E0212 19:47:22.074457 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.145388 sudo[1332]: pam_unix(sudo:session): session closed for user root Feb 12 19:47:22.152388 sshd[1326]: pam_unix(sshd:session): session closed for user core Feb 12 19:47:22.157293 systemd[1]: sshd@4-64.23.171.188:22-139.178.68.195:60198.service: Deactivated successfully. Feb 12 19:47:22.158506 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:47:22.160613 systemd-logind[1191]: Session 5 logged out. Waiting for processes to exit. 
Feb 12 19:47:22.161996 systemd-logind[1191]: Removed session 5. Feb 12 19:47:22.174963 kubelet[1536]: E0212 19:47:22.174860 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.215339 kubelet[1536]: E0212 19:47:22.215258 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:22.275805 kubelet[1536]: E0212 19:47:22.275745 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.377374 kubelet[1536]: E0212 19:47:22.377294 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.478687 kubelet[1536]: E0212 19:47:22.477814 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.580171 kubelet[1536]: E0212 19:47:22.580017 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.681873 kubelet[1536]: E0212 19:47:22.681789 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.782595 kubelet[1536]: E0212 19:47:22.781977 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.884242 kubelet[1536]: E0212 19:47:22.883952 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:22.984866 kubelet[1536]: E0212 19:47:22.984765 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.086092 kubelet[1536]: E0212 19:47:23.085549 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.187168 kubelet[1536]: E0212 19:47:23.187048 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.216841 kubelet[1536]: E0212 19:47:23.216299 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:23.288190 kubelet[1536]: E0212 19:47:23.287969 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.388893 kubelet[1536]: E0212 19:47:23.388183 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.490071 kubelet[1536]: E0212 19:47:23.489867 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.591000 kubelet[1536]: E0212 19:47:23.590892 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.691641 kubelet[1536]: E0212 19:47:23.691160 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.792141 kubelet[1536]: E0212 19:47:23.792016 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.893649 kubelet[1536]: E0212 19:47:23.893517 1536 kubelet_node_status.go:458] "Error getting the current 
node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:23.994898 kubelet[1536]: E0212 19:47:23.994479 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:24.095401 kubelet[1536]: E0212 19:47:24.095206 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:24.195624 kubelet[1536]: E0212 19:47:24.195521 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:24.217138 kubelet[1536]: E0212 19:47:24.216975 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:24.295807 kubelet[1536]: E0212 19:47:24.295742 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:24.397716 kubelet[1536]: E0212 19:47:24.397612 1536 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"64.23.171.188\" not found" Feb 12 19:47:24.503057 kubelet[1536]: I0212 19:47:24.499317 1536 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 19:47:24.505627 env[1209]: time="2024-02-12T19:47:24.502914634Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:47:24.506780 kubelet[1536]: I0212 19:47:24.506737 1536 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 19:47:25.217787 kubelet[1536]: E0212 19:47:25.217734 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:25.218699 kubelet[1536]: I0212 19:47:25.218668 1536 apiserver.go:52] "Watching apiserver" Feb 12 19:47:25.227925 kubelet[1536]: I0212 19:47:25.227883 1536 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:47:25.228229 kubelet[1536]: I0212 19:47:25.228151 1536 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:47:25.316628 kubelet[1536]: I0212 19:47:25.316534 1536 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:47:25.355811 kubelet[1536]: I0212 19:47:25.355403 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqgfj\" (UniqueName: \"kubernetes.io/projected/f9769a2b-1939-4092-ad3a-8ae55ff9e363-kube-api-access-hqgfj\") pod \"kube-proxy-t8z4f\" (UID: \"f9769a2b-1939-4092-ad3a-8ae55ff9e363\") " pod="kube-system/kube-proxy-t8z4f" Feb 12 19:47:25.355811 kubelet[1536]: I0212 19:47:25.355473 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-run\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.355811 kubelet[1536]: I0212 19:47:25.355501 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-cgroup\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.355811 kubelet[1536]: I0212 19:47:25.355529 1536 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-lib-modules\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.355811 kubelet[1536]: I0212 19:47:25.355575 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-xtables-lock\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.355811 kubelet[1536]: I0212 19:47:25.355602 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-host-proc-sys-kernel\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.356612 kubelet[1536]: I0212 19:47:25.355650 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cxg4\" (UniqueName: \"kubernetes.io/projected/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-kube-api-access-8cxg4\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357146 kubelet[1536]: I0212 19:47:25.357077 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9769a2b-1939-4092-ad3a-8ae55ff9e363-kube-proxy\") pod \"kube-proxy-t8z4f\" (UID: \"f9769a2b-1939-4092-ad3a-8ae55ff9e363\") " pod="kube-system/kube-proxy-t8z4f" Feb 12 19:47:25.357409 kubelet[1536]: I0212 19:47:25.357377 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cni-path\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357542 kubelet[1536]: I0212 19:47:25.357448 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-etc-cni-netd\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357542 kubelet[1536]: I0212 19:47:25.357502 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-clustermesh-secrets\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357649 kubelet[1536]: I0212 19:47:25.357561 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-hubble-tls\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357649 kubelet[1536]: I0212 19:47:25.357601 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-bpf-maps\") pod 
\"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357745 kubelet[1536]: I0212 19:47:25.357652 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9769a2b-1939-4092-ad3a-8ae55ff9e363-lib-modules\") pod \"kube-proxy-t8z4f\" (UID: \"f9769a2b-1939-4092-ad3a-8ae55ff9e363\") " pod="kube-system/kube-proxy-t8z4f" Feb 12 19:47:25.357745 kubelet[1536]: I0212 19:47:25.357713 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-config-path\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357838 kubelet[1536]: I0212 19:47:25.357755 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9769a2b-1939-4092-ad3a-8ae55ff9e363-xtables-lock\") pod \"kube-proxy-t8z4f\" (UID: \"f9769a2b-1939-4092-ad3a-8ae55ff9e363\") " pod="kube-system/kube-proxy-t8z4f" Feb 12 19:47:25.357838 kubelet[1536]: I0212 19:47:25.357805 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-host-proc-sys-net\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357945 kubelet[1536]: I0212 19:47:25.357844 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-hostproc\") pod \"cilium-2mpwb\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " pod="kube-system/cilium-2mpwb" Feb 12 19:47:25.357945 kubelet[1536]: I0212 19:47:25.357877 1536 reconciler.go:41] "Reconciler: start to sync state" Feb 12 19:47:25.550750 kubelet[1536]: E0212 19:47:25.550091 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:25.552912 env[1209]: time="2024-02-12T19:47:25.552695598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mpwb,Uid:9a70ba3b-4d6c-4dc0-8ec0-27b96792b162,Namespace:kube-system,Attempt:0,}" Feb 12 19:47:25.838189 kubelet[1536]: E0212 19:47:25.837733 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:25.841942 env[1209]: time="2024-02-12T19:47:25.841859182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8z4f,Uid:f9769a2b-1939-4092-ad3a-8ae55ff9e363,Namespace:kube-system,Attempt:0,}" Feb 12 19:47:26.232010 kubelet[1536]: E0212 19:47:26.221224 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:26.491123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275562172.mount: Deactivated successfully. 
Feb 12 19:47:26.497742 env[1209]: time="2024-02-12T19:47:26.496180038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:26.502359 env[1209]: time="2024-02-12T19:47:26.500869578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:26.506401 env[1209]: time="2024-02-12T19:47:26.506327687Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:26.532219 env[1209]: time="2024-02-12T19:47:26.532163952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:26.535398 env[1209]: time="2024-02-12T19:47:26.535329783Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:26.548104 env[1209]: time="2024-02-12T19:47:26.548017347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:26.551243 env[1209]: time="2024-02-12T19:47:26.551183082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:26.559842 env[1209]: time="2024-02-12T19:47:26.559777129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:26.665877 env[1209]: time="2024-02-12T19:47:26.665746887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:47:26.666217 env[1209]: time="2024-02-12T19:47:26.665847650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:47:26.666217 env[1209]: time="2024-02-12T19:47:26.665870365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:47:26.666420 env[1209]: time="2024-02-12T19:47:26.666172480Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fb352954d528068de48f992b16b29115c47bb109587eba03e808db29a048359 pid=1628 runtime=io.containerd.runc.v2 Feb 12 19:47:26.689690 env[1209]: time="2024-02-12T19:47:26.689539321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:47:26.690216 env[1209]: time="2024-02-12T19:47:26.690156086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:47:26.690503 env[1209]: time="2024-02-12T19:47:26.690459981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:47:26.691465 env[1209]: time="2024-02-12T19:47:26.691375050Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed pid=1643 runtime=io.containerd.runc.v2 Feb 12 19:47:26.936136 env[1209]: time="2024-02-12T19:47:26.936048818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mpwb,Uid:9a70ba3b-4d6c-4dc0-8ec0-27b96792b162,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\"" Feb 12 19:47:26.958325 kubelet[1536]: E0212 19:47:26.939069 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:26.962091 env[1209]: time="2024-02-12T19:47:26.961960048Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:47:27.001833 env[1209]: time="2024-02-12T19:47:27.001757273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8z4f,Uid:f9769a2b-1939-4092-ad3a-8ae55ff9e363,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fb352954d528068de48f992b16b29115c47bb109587eba03e808db29a048359\"" Feb 12 19:47:27.003923 kubelet[1536]: E0212 19:47:27.003476 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:27.222829 kubelet[1536]: E0212 19:47:27.222462 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:28.228566 kubelet[1536]: E0212 19:47:28.228206 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:29.205586 kubelet[1536]: E0212 19:47:29.194931 1536 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:29.229797 kubelet[1536]: E0212 19:47:29.229710 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:30.248388 kubelet[1536]: E0212 19:47:30.229906 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:31.230676 kubelet[1536]: E0212 19:47:31.230575 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:32.276078 kubelet[1536]: E0212 19:47:32.276033 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:33.287393 kubelet[1536]: E0212 19:47:33.280387 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:34.281206 kubelet[1536]: E0212 19:47:34.281113 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:35.282268 kubelet[1536]: E0212 19:47:35.282195 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:36.283140 kubelet[1536]: E0212 19:47:36.283054 1536 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:36.379750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2192093406.mount: Deactivated successfully. Feb 12 19:47:37.283383 kubelet[1536]: E0212 19:47:37.283297 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:38.284123 kubelet[1536]: E0212 19:47:38.284059 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:39.299770 kubelet[1536]: E0212 19:47:39.299720 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:39.450260 update_engine[1192]: I0212 19:47:39.421114 1192 update_attempter.cc:509] Updating boot flags... Feb 12 19:47:40.306106 kubelet[1536]: E0212 19:47:40.305152 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:41.306128 kubelet[1536]: E0212 19:47:41.306006 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:42.306983 kubelet[1536]: E0212 19:47:42.306909 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:42.395430 env[1209]: time="2024-02-12T19:47:42.395278337Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:42.399777 env[1209]: time="2024-02-12T19:47:42.399704260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:42.404445 env[1209]: time="2024-02-12T19:47:42.404382308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:42.405829 env[1209]: time="2024-02-12T19:47:42.405759781Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 19:47:42.408512 env[1209]: time="2024-02-12T19:47:42.408471591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 19:47:42.410903 env[1209]: time="2024-02-12T19:47:42.410818532Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:47:42.484716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632727268.mount: Deactivated successfully. 
Feb 12 19:47:42.516143 env[1209]: time="2024-02-12T19:47:42.516065302Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\"" Feb 12 19:47:42.517802 env[1209]: time="2024-02-12T19:47:42.517696578Z" level=info msg="StartContainer for \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\"" Feb 12 19:47:42.694927 env[1209]: time="2024-02-12T19:47:42.694205244Z" level=info msg="StartContainer for \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\" returns successfully" Feb 12 19:47:43.038840 env[1209]: time="2024-02-12T19:47:43.038315531Z" level=info msg="shim disconnected" id=b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735 Feb 12 19:47:43.039382 env[1209]: time="2024-02-12T19:47:43.039333685Z" level=warning msg="cleaning up after shim disconnected" id=b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735 namespace=k8s.io Feb 12 19:47:43.050012 env[1209]: time="2024-02-12T19:47:43.048893577Z" level=info msg="cleaning up dead shim" Feb 12 19:47:43.070794 env[1209]: time="2024-02-12T19:47:43.070619604Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:47:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1772 runtime=io.containerd.runc.v2\n" Feb 12 19:47:43.251604 kubelet[1536]: E0212 19:47:43.251426 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:43.261773 env[1209]: time="2024-02-12T19:47:43.260624458Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:47:43.308020 kubelet[1536]: E0212 19:47:43.307932 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:43.348657 env[1209]: time="2024-02-12T19:47:43.348589182Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\"" Feb 12 19:47:43.355420 env[1209]: time="2024-02-12T19:47:43.355361909Z" level=info msg="StartContainer for \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\"" Feb 12 19:47:43.483247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735-rootfs.mount: Deactivated successfully. Feb 12 19:47:43.572284 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:47:43.573000 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:47:43.573537 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:47:43.579277 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:47:43.584532 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 19:47:43.614360 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 19:47:43.618828 env[1209]: time="2024-02-12T19:47:43.618740902Z" level=info msg="StartContainer for \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\" returns successfully" Feb 12 19:47:43.709138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9-rootfs.mount: Deactivated successfully. Feb 12 19:47:43.727203 env[1209]: time="2024-02-12T19:47:43.727114818Z" level=info msg="shim disconnected" id=e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9 Feb 12 19:47:43.727203 env[1209]: time="2024-02-12T19:47:43.727207386Z" level=warning msg="cleaning up after shim disconnected" id=e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9 namespace=k8s.io Feb 12 19:47:43.727622 env[1209]: time="2024-02-12T19:47:43.727224352Z" level=info msg="cleaning up dead shim" Feb 12 19:47:43.762563 env[1209]: time="2024-02-12T19:47:43.762484077Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:47:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1839 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:47:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 12 19:47:44.260378 kubelet[1536]: E0212 19:47:44.260334 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:44.280237 env[1209]: time="2024-02-12T19:47:44.280168815Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:47:44.330007 kubelet[1536]: E0212 19:47:44.314968 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:44.404093 env[1209]: time="2024-02-12T19:47:44.403950614Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\"" Feb 12 19:47:44.417650 env[1209]: time="2024-02-12T19:47:44.415101451Z" level=info msg="StartContainer for \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\"" Feb 12 19:47:44.522938 systemd[1]: run-containerd-runc-k8s.io-1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82-runc.hJTQBl.mount: Deactivated successfully. Feb 12 19:47:44.645838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102090275.mount: Deactivated successfully. Feb 12 19:47:44.688823 env[1209]: time="2024-02-12T19:47:44.688739441Z" level=info msg="StartContainer for \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\" returns successfully" Feb 12 19:47:44.738353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82-rootfs.mount: Deactivated successfully. 
Feb 12 19:47:44.926144 env[1209]: time="2024-02-12T19:47:44.926068675Z" level=info msg="shim disconnected" id=1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82 Feb 12 19:47:44.926144 env[1209]: time="2024-02-12T19:47:44.926134669Z" level=warning msg="cleaning up after shim disconnected" id=1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82 namespace=k8s.io Feb 12 19:47:44.926144 env[1209]: time="2024-02-12T19:47:44.926149091Z" level=info msg="cleaning up dead shim" Feb 12 19:47:44.977900 env[1209]: time="2024-02-12T19:47:44.958691903Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:47:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1900 runtime=io.containerd.runc.v2\n" Feb 12 19:47:45.266711 kubelet[1536]: E0212 19:47:45.265404 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:45.270619 env[1209]: time="2024-02-12T19:47:45.269728261Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:47:45.322935 kubelet[1536]: E0212 19:47:45.322870 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:45.351516 env[1209]: time="2024-02-12T19:47:45.351426578Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\"" Feb 12 19:47:45.358599 env[1209]: time="2024-02-12T19:47:45.353996042Z" level=info msg="StartContainer for \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\"" Feb 12 19:47:45.603021 env[1209]: time="2024-02-12T19:47:45.602905687Z" level=info msg="StartContainer for \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\" returns successfully" Feb 12 19:47:45.675984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c-rootfs.mount: Deactivated successfully. 
Feb 12 19:47:45.818228 env[1209]: time="2024-02-12T19:47:45.817463496Z" level=info msg="shim disconnected" id=cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c Feb 12 19:47:45.818228 env[1209]: time="2024-02-12T19:47:45.817574313Z" level=warning msg="cleaning up after shim disconnected" id=cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c namespace=k8s.io Feb 12 19:47:45.818228 env[1209]: time="2024-02-12T19:47:45.817592257Z" level=info msg="cleaning up dead shim" Feb 12 19:47:45.881922 env[1209]: time="2024-02-12T19:47:45.881190627Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:47:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1961 runtime=io.containerd.runc.v2\n" Feb 12 19:47:46.098225 env[1209]: time="2024-02-12T19:47:46.097562981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:46.109069 env[1209]: time="2024-02-12T19:47:46.108863737Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:46.115759 env[1209]: time="2024-02-12T19:47:46.115589557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:46.121764 env[1209]: time="2024-02-12T19:47:46.121547001Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:47:46.123593 env[1209]: time="2024-02-12T19:47:46.123502181Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 19:47:46.128122 env[1209]: time="2024-02-12T19:47:46.128023579Z" level=info msg="CreateContainer within sandbox \"1fb352954d528068de48f992b16b29115c47bb109587eba03e808db29a048359\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:47:46.173134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149391973.mount: Deactivated successfully. 
Feb 12 19:47:46.215013 env[1209]: time="2024-02-12T19:47:46.213864874Z" level=info msg="CreateContainer within sandbox \"1fb352954d528068de48f992b16b29115c47bb109587eba03e808db29a048359\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5af4ea2f512025b9b332b14d7ce6fb69dca015df3c96d531f648c06bbd581e1c\"" Feb 12 19:47:46.215846 env[1209]: time="2024-02-12T19:47:46.215611846Z" level=info msg="StartContainer for \"5af4ea2f512025b9b332b14d7ce6fb69dca015df3c96d531f648c06bbd581e1c\"" Feb 12 19:47:46.287956 kubelet[1536]: E0212 19:47:46.285062 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:46.299114 env[1209]: time="2024-02-12T19:47:46.297794846Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:47:46.325386 kubelet[1536]: E0212 19:47:46.325308 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:46.379202 env[1209]: time="2024-02-12T19:47:46.379130893Z" level=info msg="CreateContainer within sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\"" Feb 12 19:47:46.381124 env[1209]: time="2024-02-12T19:47:46.380808041Z" level=info msg="StartContainer for \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\"" Feb 12 19:47:46.422341 env[1209]: time="2024-02-12T19:47:46.422259802Z" level=info msg="StartContainer for \"5af4ea2f512025b9b332b14d7ce6fb69dca015df3c96d531f648c06bbd581e1c\" returns successfully" Feb 12 19:47:46.661789 env[1209]: time="2024-02-12T19:47:46.661707053Z" level=info msg="StartContainer for \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\" returns successfully" Feb 12 19:47:46.717424 systemd[1]: run-containerd-runc-k8s.io-89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242-runc.YJpvNh.mount: Deactivated successfully. 
Feb 12 19:47:46.950439 kubelet[1536]: I0212 19:47:46.950209 1536 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:47:47.306615 kubelet[1536]: E0212 19:47:47.304578 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:47.339596 kubelet[1536]: E0212 19:47:47.339553 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:47.345161 kubelet[1536]: E0212 19:47:47.345123 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:47.355213 kubelet[1536]: I0212 19:47:47.355150 1536 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-t8z4f" podStartSLOduration=-9.223372011499708e+09 pod.CreationTimestamp="2024-02-12 19:47:22 +0000 UTC" firstStartedPulling="2024-02-12 19:47:27.004736725 +0000 UTC m=+19.381151853" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:47:47.354171372 +0000 UTC m=+39.730586520" watchObservedRunningTime="2024-02-12 19:47:47.355068557 +0000 UTC m=+39.731483710" Feb 12 19:47:47.401769 kubelet[1536]: I0212 19:47:47.401717 1536 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2mpwb" podStartSLOduration=-9.22337201145315e+09 pod.CreationTimestamp="2024-02-12 19:47:22 +0000 UTC" firstStartedPulling="2024-02-12 19:47:26.960742391 +0000 UTC m=+19.337157529" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:47:47.401092109 +0000 UTC m=+39.777507271" watchObservedRunningTime="2024-02-12 19:47:47.401625257 +0000 UTC m=+39.778040409" Feb 12 19:47:47.956067 kernel: Initializing XFRM netlink socket Feb 12 19:47:48.341124 kubelet[1536]: E0212 19:47:48.340921 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:48.349292 kubelet[1536]: E0212 19:47:48.349239 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:48.352536 kubelet[1536]: E0212 19:47:48.352302 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:49.195267 kubelet[1536]: E0212 19:47:49.195174 1536 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:49.345350 kubelet[1536]: E0212 19:47:49.345214 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:49.366148 kubelet[1536]: E0212 19:47:49.365541 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:49.869371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:47:49.866002 systemd-networkd[1085]: cilium_host: Link UP Feb 12 19:47:49.866374 systemd-networkd[1085]: cilium_net: Link UP Feb 12 19:47:49.866382 
systemd-networkd[1085]: cilium_net: Gained carrier Feb 12 19:47:49.866634 systemd-networkd[1085]: cilium_host: Gained carrier Feb 12 19:47:49.892674 systemd-networkd[1085]: cilium_host: Gained IPv6LL Feb 12 19:47:50.270912 systemd-networkd[1085]: cilium_vxlan: Link UP Feb 12 19:47:50.270928 systemd-networkd[1085]: cilium_vxlan: Gained carrier Feb 12 19:47:50.347159 kubelet[1536]: E0212 19:47:50.346941 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:50.381221 systemd-networkd[1085]: cilium_net: Gained IPv6LL Feb 12 19:47:50.914020 kernel: NET: Registered PF_ALG protocol family Feb 12 19:47:51.107627 kubelet[1536]: I0212 19:47:51.102914 1536 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:47:51.135479 kubelet[1536]: I0212 19:47:51.135377 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqjgt\" (UniqueName: \"kubernetes.io/projected/fb8129d3-e3cc-42a7-b7df-d0de053eaba2-kube-api-access-jqjgt\") pod \"nginx-deployment-8ffc5cf85-9v6vn\" (UID: \"fb8129d3-e3cc-42a7-b7df-d0de053eaba2\") " pod="default/nginx-deployment-8ffc5cf85-9v6vn" Feb 12 19:47:51.354728 kubelet[1536]: E0212 19:47:51.347933 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:51.408640 env[1209]: time="2024-02-12T19:47:51.408097321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-9v6vn,Uid:fb8129d3-e3cc-42a7-b7df-d0de053eaba2,Namespace:default,Attempt:0,}" Feb 12 19:47:51.859693 systemd-networkd[1085]: cilium_vxlan: Gained IPv6LL Feb 12 19:47:52.356345 kubelet[1536]: E0212 19:47:52.356288 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:52.951060 systemd-networkd[1085]: lxc_health: Link UP Feb 12 19:47:52.960008 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:47:52.963751 systemd-networkd[1085]: lxc_health: Gained carrier Feb 12 19:47:53.357955 kubelet[1536]: E0212 19:47:53.357867 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:53.553354 kubelet[1536]: E0212 19:47:53.553310 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:53.608775 systemd-networkd[1085]: lxc8a1075eee799: Link UP Feb 12 19:47:53.619070 kernel: eth0: renamed from tmpbcf33 Feb 12 19:47:53.625458 systemd-networkd[1085]: lxc8a1075eee799: Gained carrier Feb 12 19:47:53.626085 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8a1075eee799: link becomes ready Feb 12 19:47:54.282318 systemd-networkd[1085]: lxc_health: Gained IPv6LL Feb 12 19:47:54.358949 kubelet[1536]: E0212 19:47:54.358888 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:54.379862 kubelet[1536]: E0212 19:47:54.379815 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:55.360963 kubelet[1536]: E0212 19:47:55.360853 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 
19:47:55.387315 kubelet[1536]: E0212 19:47:55.387280 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:47:55.498332 systemd-networkd[1085]: lxc8a1075eee799: Gained IPv6LL Feb 12 19:47:56.362291 kubelet[1536]: E0212 19:47:56.362184 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:57.364593 kubelet[1536]: E0212 19:47:57.364465 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:58.365213 kubelet[1536]: E0212 19:47:58.365066 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:47:59.366418 kubelet[1536]: E0212 19:47:59.366354 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:00.367483 kubelet[1536]: E0212 19:48:00.367412 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:00.998181 env[1209]: time="2024-02-12T19:48:00.988176905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:48:00.998181 env[1209]: time="2024-02-12T19:48:00.988276562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:48:00.998181 env[1209]: time="2024-02-12T19:48:00.988309898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:48:00.998181 env[1209]: time="2024-02-12T19:48:00.995368242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcf333d98ff6cb1169cb1f9287163de2c20a4dad53d46e4114bc530666b0b75e pid=2626 runtime=io.containerd.runc.v2 Feb 12 19:48:01.137175 env[1209]: time="2024-02-12T19:48:01.137095081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-9v6vn,Uid:fb8129d3-e3cc-42a7-b7df-d0de053eaba2,Namespace:default,Attempt:0,} returns sandbox id \"bcf333d98ff6cb1169cb1f9287163de2c20a4dad53d46e4114bc530666b0b75e\"" Feb 12 19:48:01.146962 env[1209]: time="2024-02-12T19:48:01.146233044Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:48:01.369470 kubelet[1536]: E0212 19:48:01.369413 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:02.371219 kubelet[1536]: E0212 19:48:02.371110 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:03.378358 kubelet[1536]: E0212 19:48:03.374861 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:04.378630 kubelet[1536]: E0212 19:48:04.378544 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:05.379894 kubelet[1536]: E0212 19:48:05.379826 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:06.386861 kubelet[1536]: E0212 19:48:06.386748 1536 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:06.548068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249903476.mount: Deactivated successfully. Feb 12 19:48:07.387199 kubelet[1536]: E0212 19:48:07.387097 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:08.377563 env[1209]: time="2024-02-12T19:48:08.376480029Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:08.386700 env[1209]: time="2024-02-12T19:48:08.386465708Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:08.387859 kubelet[1536]: E0212 19:48:08.387779 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:08.391635 env[1209]: time="2024-02-12T19:48:08.388327367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:08.392443 env[1209]: time="2024-02-12T19:48:08.392351410Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:08.394326 env[1209]: time="2024-02-12T19:48:08.394206400Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 19:48:08.398171 env[1209]: time="2024-02-12T19:48:08.398070827Z" level=info msg="CreateContainer within sandbox \"bcf333d98ff6cb1169cb1f9287163de2c20a4dad53d46e4114bc530666b0b75e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 19:48:08.442322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount411876149.mount: Deactivated successfully. 
Feb 12 19:48:08.460026 env[1209]: time="2024-02-12T19:48:08.459931830Z" level=info msg="CreateContainer within sandbox \"bcf333d98ff6cb1169cb1f9287163de2c20a4dad53d46e4114bc530666b0b75e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"52c397ee0204865ca554eb89afe27afd8c9bb6389359ab8c23e45e23e49936fa\"" Feb 12 19:48:08.461623 env[1209]: time="2024-02-12T19:48:08.461530068Z" level=info msg="StartContainer for \"52c397ee0204865ca554eb89afe27afd8c9bb6389359ab8c23e45e23e49936fa\"" Feb 12 19:48:08.622378 env[1209]: time="2024-02-12T19:48:08.622308559Z" level=info msg="StartContainer for \"52c397ee0204865ca554eb89afe27afd8c9bb6389359ab8c23e45e23e49936fa\" returns successfully" Feb 12 19:48:09.195619 kubelet[1536]: E0212 19:48:09.195564 1536 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:09.389229 kubelet[1536]: E0212 19:48:09.389179 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:10.390990 kubelet[1536]: E0212 19:48:10.390523 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:11.393544 kubelet[1536]: E0212 19:48:11.393394 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:12.394133 kubelet[1536]: E0212 19:48:12.393959 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:13.394525 kubelet[1536]: E0212 19:48:13.394470 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:14.395787 kubelet[1536]: E0212 19:48:14.395734 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:15.400132 kubelet[1536]: E0212 19:48:15.400016 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:15.743620 kubelet[1536]: I0212 19:48:15.743058 1536 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-9v6vn" podStartSLOduration=-9.223372012111807e+09 pod.CreationTimestamp="2024-02-12 19:47:51 +0000 UTC" firstStartedPulling="2024-02-12 19:48:01.139792271 +0000 UTC m=+53.516207407" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:48:09.552669582 +0000 UTC m=+61.929084735" watchObservedRunningTime="2024-02-12 19:48:15.742968436 +0000 UTC m=+68.119383590" Feb 12 19:48:15.744398 kubelet[1536]: I0212 19:48:15.744351 1536 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:48:15.839364 kubelet[1536]: I0212 19:48:15.839307 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7cxb\" (UniqueName: \"kubernetes.io/projected/91954286-3b8d-40ff-b6e2-f447046cdcfa-kube-api-access-l7cxb\") pod \"nfs-server-provisioner-0\" (UID: \"91954286-3b8d-40ff-b6e2-f447046cdcfa\") " pod="default/nfs-server-provisioner-0" Feb 12 19:48:15.839364 kubelet[1536]: I0212 19:48:15.839374 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/91954286-3b8d-40ff-b6e2-f447046cdcfa-data\") pod \"nfs-server-provisioner-0\" (UID: 
\"91954286-3b8d-40ff-b6e2-f447046cdcfa\") " pod="default/nfs-server-provisioner-0" Feb 12 19:48:16.075601 env[1209]: time="2024-02-12T19:48:16.074957952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:91954286-3b8d-40ff-b6e2-f447046cdcfa,Namespace:default,Attempt:0,}" Feb 12 19:48:16.167822 systemd-networkd[1085]: lxc6af04fcc066c: Link UP Feb 12 19:48:16.182091 kernel: eth0: renamed from tmp08f13 Feb 12 19:48:16.189424 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:48:16.189589 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6af04fcc066c: link becomes ready Feb 12 19:48:16.189837 systemd-networkd[1085]: lxc6af04fcc066c: Gained carrier Feb 12 19:48:16.400903 kubelet[1536]: E0212 19:48:16.400722 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:16.761124 env[1209]: time="2024-02-12T19:48:16.760251151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:48:16.761124 env[1209]: time="2024-02-12T19:48:16.760397286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:48:16.761124 env[1209]: time="2024-02-12T19:48:16.760429612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:48:16.761124 env[1209]: time="2024-02-12T19:48:16.760645573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08f13ea256f0ed237a13a7f9cb87b73e9a2a925ef01e4a2494c7bdce1aabe48a pid=2804 runtime=io.containerd.runc.v2 Feb 12 19:48:16.990588 env[1209]: time="2024-02-12T19:48:16.985896083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:91954286-3b8d-40ff-b6e2-f447046cdcfa,Namespace:default,Attempt:0,} returns sandbox id \"08f13ea256f0ed237a13a7f9cb87b73e9a2a925ef01e4a2494c7bdce1aabe48a\"" Feb 12 19:48:16.990588 env[1209]: time="2024-02-12T19:48:16.989913124Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 19:48:17.010304 systemd[1]: run-containerd-runc-k8s.io-08f13ea256f0ed237a13a7f9cb87b73e9a2a925ef01e4a2494c7bdce1aabe48a-runc.LG6zkz.mount: Deactivated successfully. Feb 12 19:48:17.406349 kubelet[1536]: E0212 19:48:17.406179 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:17.792386 systemd-networkd[1085]: lxc6af04fcc066c: Gained IPv6LL Feb 12 19:48:18.406482 kubelet[1536]: E0212 19:48:18.406405 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:19.407556 kubelet[1536]: E0212 19:48:19.407434 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:20.408307 kubelet[1536]: E0212 19:48:20.408176 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:21.227320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612315961.mount: Deactivated successfully. 
Feb 12 19:48:21.408472 kubelet[1536]: E0212 19:48:21.408368 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:22.408969 kubelet[1536]: E0212 19:48:22.408888 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:23.409875 kubelet[1536]: E0212 19:48:23.409811 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:24.410558 kubelet[1536]: E0212 19:48:24.410463 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:25.204720 env[1209]: time="2024-02-12T19:48:25.204652003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:25.208359 env[1209]: time="2024-02-12T19:48:25.208288826Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:25.214498 env[1209]: time="2024-02-12T19:48:25.214418401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:25.218338 env[1209]: time="2024-02-12T19:48:25.218268054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:25.219576 env[1209]: time="2024-02-12T19:48:25.219512079Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 19:48:25.224163 env[1209]: time="2024-02-12T19:48:25.224104599Z" level=info msg="CreateContainer within sandbox \"08f13ea256f0ed237a13a7f9cb87b73e9a2a925ef01e4a2494c7bdce1aabe48a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 19:48:25.243753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404382745.mount: Deactivated successfully. Feb 12 19:48:25.255409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387027982.mount: Deactivated successfully. 
Feb 12 19:48:25.259340 env[1209]: time="2024-02-12T19:48:25.259267471Z" level=info msg="CreateContainer within sandbox \"08f13ea256f0ed237a13a7f9cb87b73e9a2a925ef01e4a2494c7bdce1aabe48a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4be84b74e897638301426f11e56ff06995e8e15786b5c60c9dfc1392a69729c9\"" Feb 12 19:48:25.261120 env[1209]: time="2024-02-12T19:48:25.260848380Z" level=info msg="StartContainer for \"4be84b74e897638301426f11e56ff06995e8e15786b5c60c9dfc1392a69729c9\"" Feb 12 19:48:25.361395 env[1209]: time="2024-02-12T19:48:25.361332866Z" level=info msg="StartContainer for \"4be84b74e897638301426f11e56ff06995e8e15786b5c60c9dfc1392a69729c9\" returns successfully" Feb 12 19:48:25.410873 kubelet[1536]: E0212 19:48:25.410810 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:26.411053 kubelet[1536]: E0212 19:48:26.410968 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:27.411951 kubelet[1536]: E0212 19:48:27.411897 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:28.412986 kubelet[1536]: E0212 19:48:28.412925 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:29.194532 kubelet[1536]: E0212 19:48:29.194454 1536 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:29.415247 kubelet[1536]: E0212 19:48:29.415098 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:30.416264 kubelet[1536]: E0212 19:48:30.416204 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:31.417800 kubelet[1536]: E0212 19:48:31.417675 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:32.418727 kubelet[1536]: E0212 19:48:32.418678 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:33.420399 kubelet[1536]: E0212 19:48:33.420207 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:34.421197 kubelet[1536]: E0212 19:48:34.421128 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:35.311638 kubelet[1536]: I0212 19:48:35.311576 1536 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372016543251e+09 pod.CreationTimestamp="2024-02-12 19:48:15 +0000 UTC" firstStartedPulling="2024-02-12 19:48:16.988880336 +0000 UTC m=+69.365295464" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:48:25.662228135 +0000 UTC m=+78.038643289" watchObservedRunningTime="2024-02-12 19:48:35.311524241 +0000 UTC m=+87.687939418" Feb 12 19:48:35.312654 kubelet[1536]: I0212 19:48:35.312599 1536 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:48:35.433380 kubelet[1536]: E0212 19:48:35.433295 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 19:48:35.474839 kubelet[1536]: I0212 19:48:35.474785 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1e910cc6-c929-462a-9c35-5e2dbe89e2f6\" (UniqueName: \"kubernetes.io/nfs/feb7df29-8874-4843-a14d-80872bd43da4-pvc-1e910cc6-c929-462a-9c35-5e2dbe89e2f6\") pod \"test-pod-1\" (UID: \"feb7df29-8874-4843-a14d-80872bd43da4\") " pod="default/test-pod-1" Feb 12 19:48:35.475922 kubelet[1536]: I0212 19:48:35.475456 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phdql\" (UniqueName: \"kubernetes.io/projected/feb7df29-8874-4843-a14d-80872bd43da4-kube-api-access-phdql\") pod \"test-pod-1\" (UID: \"feb7df29-8874-4843-a14d-80872bd43da4\") " pod="default/test-pod-1" Feb 12 19:48:35.704111 kernel: FS-Cache: Loaded Feb 12 19:48:35.813259 kernel: RPC: Registered named UNIX socket transport module. Feb 12 19:48:35.813489 kernel: RPC: Registered udp transport module. Feb 12 19:48:35.813708 kernel: RPC: Registered tcp transport module. Feb 12 19:48:35.813953 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 19:48:35.910668 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 19:48:36.257557 kernel: NFS: Registering the id_resolver key type Feb 12 19:48:36.257887 kernel: Key type id_resolver registered Feb 12 19:48:36.259067 kernel: Key type id_legacy registered Feb 12 19:48:36.434233 kubelet[1536]: E0212 19:48:36.434101 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:37.434918 kubelet[1536]: E0212 19:48:37.434849 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:38.435642 kubelet[1536]: E0212 19:48:38.435498 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:39.437400 kubelet[1536]: E0212 19:48:39.437341 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:40.439150 kubelet[1536]: E0212 19:48:40.439088 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:41.440722 kubelet[1536]: E0212 19:48:41.440655 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:42.442600 kubelet[1536]: E0212 19:48:42.442546 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:42.546199 nfsidmap[2945]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-3-7482959a87' Feb 12 19:48:43.444007 kubelet[1536]: E0212 19:48:43.443920 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:44.445347 kubelet[1536]: E0212 19:48:44.444975 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:45.445493 kubelet[1536]: E0212 19:48:45.445437 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:46.446850 kubelet[1536]: E0212 19:48:46.446789 1536 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:47.449283 kubelet[1536]: E0212 19:48:47.449195 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:48.450304 kubelet[1536]: E0212 19:48:48.450235 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:48.690220 nfsidmap[2946]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-3-7482959a87' Feb 12 19:48:48.825089 env[1209]: time="2024-02-12T19:48:48.824503575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:feb7df29-8874-4843-a14d-80872bd43da4,Namespace:default,Attempt:0,}" Feb 12 19:48:48.955430 systemd-networkd[1085]: lxc128c472cde86: Link UP Feb 12 19:48:48.970178 kernel: eth0: renamed from tmpd3aed Feb 12 19:48:48.977133 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 19:48:48.977324 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc128c472cde86: link becomes ready Feb 12 19:48:48.977549 systemd-networkd[1085]: lxc128c472cde86: Gained carrier Feb 12 19:48:49.195009 kubelet[1536]: E0212 19:48:49.194817 1536 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:49.456457 kubelet[1536]: E0212 19:48:49.456292 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:49.457198 env[1209]: time="2024-02-12T19:48:49.451604315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:48:49.457198 env[1209]: time="2024-02-12T19:48:49.451672490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:48:49.457198 env[1209]: time="2024-02-12T19:48:49.451688590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:48:49.457198 env[1209]: time="2024-02-12T19:48:49.451919549Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3aed717cbecfeb0f4a6aaa11a0930d9c1bdf2a92d0f6f269f6b4253f5993270 pid=2977 runtime=io.containerd.runc.v2 Feb 12 19:48:49.581626 env[1209]: time="2024-02-12T19:48:49.581543044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:feb7df29-8874-4843-a14d-80872bd43da4,Namespace:default,Attempt:0,} returns sandbox id \"d3aed717cbecfeb0f4a6aaa11a0930d9c1bdf2a92d0f6f269f6b4253f5993270\"" Feb 12 19:48:49.585101 env[1209]: time="2024-02-12T19:48:49.584967684Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 19:48:50.026551 systemd-networkd[1085]: lxc128c472cde86: Gained IPv6LL Feb 12 19:48:50.079892 env[1209]: time="2024-02-12T19:48:50.079781104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:50.088078 env[1209]: time="2024-02-12T19:48:50.087938195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:50.093335 env[1209]: time="2024-02-12T19:48:50.093254599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:50.098872 env[1209]: time="2024-02-12T19:48:50.098801895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:48:50.099976 env[1209]: time="2024-02-12T19:48:50.099923324Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 19:48:50.113449 env[1209]: time="2024-02-12T19:48:50.111780403Z" level=info msg="CreateContainer within sandbox \"d3aed717cbecfeb0f4a6aaa11a0930d9c1bdf2a92d0f6f269f6b4253f5993270\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 19:48:50.157776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435261150.mount: Deactivated successfully. 
Feb 12 19:48:50.181575 env[1209]: time="2024-02-12T19:48:50.181492316Z" level=info msg="CreateContainer within sandbox \"d3aed717cbecfeb0f4a6aaa11a0930d9c1bdf2a92d0f6f269f6b4253f5993270\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6728cd6ee7e7f5d27f5648d12d5f853e1db995b0135ac564d0b885b6a0f1da17\"" Feb 12 19:48:50.183216 env[1209]: time="2024-02-12T19:48:50.183005047Z" level=info msg="StartContainer for \"6728cd6ee7e7f5d27f5648d12d5f853e1db995b0135ac564d0b885b6a0f1da17\"" Feb 12 19:48:50.305435 env[1209]: time="2024-02-12T19:48:50.305349409Z" level=info msg="StartContainer for \"6728cd6ee7e7f5d27f5648d12d5f853e1db995b0135ac564d0b885b6a0f1da17\" returns successfully" Feb 12 19:48:50.456737 kubelet[1536]: E0212 19:48:50.456539 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:50.773078 kubelet[1536]: I0212 19:48:50.772684 1536 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372002082153e+09 pod.CreationTimestamp="2024-02-12 19:48:16 +0000 UTC" firstStartedPulling="2024-02-12 19:48:49.584152303 +0000 UTC m=+101.960567449" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:48:50.771620846 +0000 UTC m=+103.148035994" watchObservedRunningTime="2024-02-12 19:48:50.772621974 +0000 UTC m=+103.149037125" Feb 12 19:48:51.457700 kubelet[1536]: E0212 19:48:51.457599 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:52.459153 kubelet[1536]: E0212 19:48:52.459094 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:53.460662 kubelet[1536]: E0212 19:48:53.460598 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:54.461646 kubelet[1536]: E0212 19:48:54.461575 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:55.179436 systemd[1]: run-containerd-runc-k8s.io-89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242-runc.z1Ciqd.mount: Deactivated successfully. Feb 12 19:48:55.213971 env[1209]: time="2024-02-12T19:48:55.213887835Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:48:55.224372 env[1209]: time="2024-02-12T19:48:55.224311365Z" level=info msg="StopContainer for \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\" with timeout 1 (s)" Feb 12 19:48:55.225188 env[1209]: time="2024-02-12T19:48:55.225137653Z" level=info msg="Stop container \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\" with signal terminated" Feb 12 19:48:55.236301 systemd-networkd[1085]: lxc_health: Link DOWN Feb 12 19:48:55.236309 systemd-networkd[1085]: lxc_health: Lost carrier Feb 12 19:48:55.317941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242-rootfs.mount: Deactivated successfully. 
Feb 12 19:48:55.395776 env[1209]: time="2024-02-12T19:48:55.395711406Z" level=info msg="shim disconnected" id=89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242 Feb 12 19:48:55.395776 env[1209]: time="2024-02-12T19:48:55.395774304Z" level=warning msg="cleaning up after shim disconnected" id=89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242 namespace=k8s.io Feb 12 19:48:55.395776 env[1209]: time="2024-02-12T19:48:55.395788353Z" level=info msg="cleaning up dead shim" Feb 12 19:48:55.410560 env[1209]: time="2024-02-12T19:48:55.410477514Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3108 runtime=io.containerd.runc.v2\n" Feb 12 19:48:55.423077 env[1209]: time="2024-02-12T19:48:55.422968848Z" level=info msg="StopContainer for \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\" returns successfully" Feb 12 19:48:55.424875 env[1209]: time="2024-02-12T19:48:55.424818306Z" level=info msg="StopPodSandbox for \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\"" Feb 12 19:48:55.431053 env[1209]: time="2024-02-12T19:48:55.424942110Z" level=info msg="Container to stop \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.431053 env[1209]: time="2024-02-12T19:48:55.424972666Z" level=info msg="Container to stop \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.431053 env[1209]: time="2024-02-12T19:48:55.425008189Z" level=info msg="Container to stop \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.431053 env[1209]: time="2024-02-12T19:48:55.425053724Z" level=info msg="Container to stop \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.431053 env[1209]: time="2024-02-12T19:48:55.425071907Z" level=info msg="Container to stop \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:55.428582 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed-shm.mount: Deactivated successfully. Feb 12 19:48:55.463244 kubelet[1536]: E0212 19:48:55.463157 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:55.481153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed-rootfs.mount: Deactivated successfully. 
Feb 12 19:48:55.516972 env[1209]: time="2024-02-12T19:48:55.516889468Z" level=info msg="shim disconnected" id=b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed Feb 12 19:48:55.517404 env[1209]: time="2024-02-12T19:48:55.517372840Z" level=warning msg="cleaning up after shim disconnected" id=b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed namespace=k8s.io Feb 12 19:48:55.517547 env[1209]: time="2024-02-12T19:48:55.517524559Z" level=info msg="cleaning up dead shim" Feb 12 19:48:55.545472 env[1209]: time="2024-02-12T19:48:55.545393438Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3140 runtime=io.containerd.runc.v2\n" Feb 12 19:48:55.546436 env[1209]: time="2024-02-12T19:48:55.546372154Z" level=info msg="TearDown network for sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" successfully" Feb 12 19:48:55.546869 env[1209]: time="2024-02-12T19:48:55.546654436Z" level=info msg="StopPodSandbox for \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" returns successfully" Feb 12 19:48:55.715098 kubelet[1536]: I0212 19:48:55.713342 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-bpf-maps\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715098 kubelet[1536]: I0212 19:48:55.713460 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-clustermesh-secrets\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715098 kubelet[1536]: I0212 19:48:55.713516 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-config-path\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715098 kubelet[1536]: I0212 19:48:55.713552 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-cgroup\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715098 kubelet[1536]: I0212 19:48:55.713585 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-lib-modules\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715098 kubelet[1536]: I0212 19:48:55.713611 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-run\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715722 kubelet[1536]: I0212 19:48:55.713630 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-host-proc-sys-kernel\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: 
\"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715722 kubelet[1536]: I0212 19:48:55.713675 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cxg4\" (UniqueName: \"kubernetes.io/projected/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-kube-api-access-8cxg4\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715722 kubelet[1536]: I0212 19:48:55.713699 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cni-path\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715722 kubelet[1536]: I0212 19:48:55.713742 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-hubble-tls\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715722 kubelet[1536]: I0212 19:48:55.713783 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-host-proc-sys-net\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.715722 kubelet[1536]: I0212 19:48:55.713843 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-hostproc\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.716113 kubelet[1536]: I0212 19:48:55.713871 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-xtables-lock\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.716113 kubelet[1536]: I0212 19:48:55.713909 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-etc-cni-netd\") pod \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\" (UID: \"9a70ba3b-4d6c-4dc0-8ec0-27b96792b162\") " Feb 12 19:48:55.716113 kubelet[1536]: I0212 19:48:55.714013 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.716113 kubelet[1536]: I0212 19:48:55.714310 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.716113 kubelet[1536]: I0212 19:48:55.714991 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cni-path" (OuterVolumeSpecName: "cni-path") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.716113 kubelet[1536]: W0212 19:48:55.715643 1536 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:48:55.719021 kubelet[1536]: I0212 19:48:55.718132 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.719021 kubelet[1536]: I0212 19:48:55.718272 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.719021 kubelet[1536]: I0212 19:48:55.718308 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.719021 kubelet[1536]: I0212 19:48:55.718336 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.720627 kubelet[1536]: I0212 19:48:55.720569 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-hostproc" (OuterVolumeSpecName: "hostproc") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.720904 kubelet[1536]: I0212 19:48:55.720797 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.720904 kubelet[1536]: I0212 19:48:55.720845 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:48:55.721609 kubelet[1536]: I0212 19:48:55.721560 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:48:55.725442 kubelet[1536]: I0212 19:48:55.725384 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:55.730564 kubelet[1536]: I0212 19:48:55.730500 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:48:55.732384 kubelet[1536]: I0212 19:48:55.732325 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-kube-api-access-8cxg4" (OuterVolumeSpecName: "kube-api-access-8cxg4") pod "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" (UID: "9a70ba3b-4d6c-4dc0-8ec0-27b96792b162"). InnerVolumeSpecName "kube-api-access-8cxg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:48:55.784705 kubelet[1536]: I0212 19:48:55.784668 1536 scope.go:115] "RemoveContainer" containerID="89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242" Feb 12 19:48:55.793430 env[1209]: time="2024-02-12T19:48:55.793174516Z" level=info msg="RemoveContainer for \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\"" Feb 12 19:48:55.805629 env[1209]: time="2024-02-12T19:48:55.804960972Z" level=info msg="RemoveContainer for \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\" returns successfully" Feb 12 19:48:55.809322 kubelet[1536]: I0212 19:48:55.809283 1536 scope.go:115] "RemoveContainer" containerID="cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c" Feb 12 19:48:55.813072 env[1209]: time="2024-02-12T19:48:55.812557329Z" level=info msg="RemoveContainer for \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\"" Feb 12 19:48:55.816934 kubelet[1536]: I0212 19:48:55.816719 1536 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-run\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.817229 kubelet[1536]: I0212 19:48:55.816949 1536 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-host-proc-sys-kernel\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.820226 env[1209]: time="2024-02-12T19:48:55.819781257Z" level=info msg="RemoveContainer for \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\" returns successfully" Feb 12 19:48:55.821111 kubelet[1536]: I0212 19:48:55.816973 1536 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8cxg4\" (UniqueName: \"kubernetes.io/projected/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-kube-api-access-8cxg4\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.824313 kubelet[1536]: I0212 19:48:55.822285 1536 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cni-path\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.824313 kubelet[1536]: I0212 19:48:55.824223 1536 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-xtables-lock\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.824957 kubelet[1536]: I0212 19:48:55.824906 1536 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-etc-cni-netd\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.824957 kubelet[1536]: I0212 19:48:55.824956 1536 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-hubble-tls\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.824957 kubelet[1536]: I0212 19:48:55.824976 1536 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-host-proc-sys-net\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.825285 kubelet[1536]: I0212 19:48:55.824992 1536 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-hostproc\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.825285 kubelet[1536]: I0212 19:48:55.825011 1536 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-cgroup\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.825285 kubelet[1536]: I0212 19:48:55.825058 1536 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-lib-modules\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.825285 kubelet[1536]: I0212 19:48:55.825078 1536 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-bpf-maps\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.825285 kubelet[1536]: I0212 19:48:55.825095 1536 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-clustermesh-secrets\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.825285 kubelet[1536]: I0212 19:48:55.825111 1536 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162-cilium-config-path\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:48:55.825285 kubelet[1536]: I0212 19:48:55.820195 1536 scope.go:115] "RemoveContainer" containerID="1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82" Feb 12 19:48:55.838436 env[1209]: time="2024-02-12T19:48:55.837864809Z" level=info msg="RemoveContainer for \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\"" Feb 12 19:48:55.844675 env[1209]: time="2024-02-12T19:48:55.844597431Z" level=info msg="RemoveContainer for \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\" returns successfully" Feb 12 19:48:55.845408 kubelet[1536]: I0212 19:48:55.845325 1536 scope.go:115] "RemoveContainer" containerID="e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9" Feb 12 19:48:55.847681 env[1209]: time="2024-02-12T19:48:55.847617792Z" level=info msg="RemoveContainer for \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\"" Feb 12 19:48:55.859161 env[1209]: time="2024-02-12T19:48:55.859071653Z" level=info msg="RemoveContainer for \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\" returns successfully" Feb 12 19:48:55.859856 kubelet[1536]: I0212 19:48:55.859729 1536 scope.go:115] "RemoveContainer" containerID="b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735" Feb 12 19:48:55.862693 env[1209]: time="2024-02-12T19:48:55.862605772Z" level=info msg="RemoveContainer for \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\"" Feb 12 19:48:55.872909 env[1209]: time="2024-02-12T19:48:55.872779638Z" level=info msg="RemoveContainer for \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\" returns successfully" Feb 12 19:48:55.873313 kubelet[1536]: I0212 19:48:55.873236 1536 scope.go:115] "RemoveContainer" containerID="89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242" Feb 12 19:48:55.873811 env[1209]: time="2024-02-12T19:48:55.873671748Z" level=error msg="ContainerStatus for \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\": not found" Feb 12 19:48:55.874134 kubelet[1536]: E0212 19:48:55.874078 1536 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\": not found" containerID="89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242" Feb 12 19:48:55.874277 kubelet[1536]: I0212 19:48:55.874173 1536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242} err="failed to get container status \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\": rpc error: code = NotFound desc = an error occurred when try to find container \"89357b93a4445305e6d63ddca26aa6f2a30240239db8ef0f231d3951ac9e3242\": not found" Feb 12 19:48:55.874277 kubelet[1536]: I0212 19:48:55.874212 1536 scope.go:115] "RemoveContainer" containerID="cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c" Feb 12 19:48:55.875468 env[1209]: time="2024-02-12T19:48:55.875359069Z" level=error msg="ContainerStatus for \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\": not found" Feb 12 19:48:55.876121 kubelet[1536]: E0212 19:48:55.875888 1536 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\": not found" containerID="cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c" Feb 12 19:48:55.876121 kubelet[1536]: I0212 19:48:55.875942 1536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c} err="failed to get container status \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbe8c694e4bbf51979e227b6430aed56719d9e7ec2ace8e56fb51c28019c2d5c\": not found" Feb 12 19:48:55.876121 kubelet[1536]: I0212 19:48:55.875964 1536 scope.go:115] "RemoveContainer" containerID="1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82" Feb 12 19:48:55.876478 env[1209]: time="2024-02-12T19:48:55.876344484Z" level=error msg="ContainerStatus for \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\": not found" Feb 12 19:48:55.876881 kubelet[1536]: E0212 19:48:55.876693 1536 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\": not found" containerID="1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82" Feb 12 19:48:55.876881 kubelet[1536]: I0212 19:48:55.876742 1536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82} err="failed to get container 
status \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\": rpc error: code = NotFound desc = an error occurred when try to find container \"1737047624b907e1a62da1c8801693907378edfb8254fa6fd06aa6b02e31cc82\": not found" Feb 12 19:48:55.876881 kubelet[1536]: I0212 19:48:55.876761 1536 scope.go:115] "RemoveContainer" containerID="e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9" Feb 12 19:48:55.878014 env[1209]: time="2024-02-12T19:48:55.877094975Z" level=error msg="ContainerStatus for \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\": not found" Feb 12 19:48:55.880687 kubelet[1536]: E0212 19:48:55.878242 1536 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\": not found" containerID="e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9" Feb 12 19:48:55.880687 kubelet[1536]: I0212 19:48:55.878331 1536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9} err="failed to get container status \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e31076a8dd7305dae4f250433e49076cd3e33707322afb8c924e0420068423e9\": not found" Feb 12 19:48:55.880687 kubelet[1536]: I0212 19:48:55.878367 1536 scope.go:115] "RemoveContainer" containerID="b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735" Feb 12 19:48:55.880687 kubelet[1536]: E0212 19:48:55.880568 1536 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\": not found" containerID="b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735" Feb 12 19:48:55.880687 kubelet[1536]: I0212 19:48:55.880619 1536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735} err="failed to get container status \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\": not found" Feb 12 19:48:55.881217 env[1209]: time="2024-02-12T19:48:55.879290035Z" level=error msg="ContainerStatus for \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a49d151ec226cdf18737b2723b69f45b4ac7e8b5dae890393cbe3336465735\": not found" Feb 12 19:48:55.929991 kubelet[1536]: I0212 19:48:55.929952 1536 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9a70ba3b-4d6c-4dc0-8ec0-27b96792b162 path="/var/lib/kubelet/pods/9a70ba3b-4d6c-4dc0-8ec0-27b96792b162/volumes" Feb 12 19:48:56.174497 systemd[1]: var-lib-kubelet-pods-9a70ba3b\x2d4d6c\x2d4dc0\x2d8ec0\x2d27b96792b162-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8cxg4.mount: Deactivated successfully. 
Feb 12 19:48:56.174816 systemd[1]: var-lib-kubelet-pods-9a70ba3b\x2d4d6c\x2d4dc0\x2d8ec0\x2d27b96792b162-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:48:56.175312 systemd[1]: var-lib-kubelet-pods-9a70ba3b\x2d4d6c\x2d4dc0\x2d8ec0\x2d27b96792b162-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:48:56.464845 kubelet[1536]: E0212 19:48:56.464284 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:57.465870 kubelet[1536]: E0212 19:48:57.465802 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:58.467354 kubelet[1536]: E0212 19:48:58.467293 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:58.611452 kubelet[1536]: I0212 19:48:58.611374 1536 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:48:58.611964 kubelet[1536]: E0212 19:48:58.611909 1536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" containerName="mount-cgroup" Feb 12 19:48:58.612230 kubelet[1536]: E0212 19:48:58.612208 1536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" containerName="clean-cilium-state" Feb 12 19:48:58.612383 kubelet[1536]: E0212 19:48:58.612366 1536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" containerName="apply-sysctl-overwrites" Feb 12 19:48:58.612509 kubelet[1536]: E0212 19:48:58.612494 1536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" containerName="mount-bpf-fs" Feb 12 19:48:58.612639 kubelet[1536]: E0212 19:48:58.612626 1536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" containerName="cilium-agent" Feb 12 19:48:58.612832 kubelet[1536]: I0212 19:48:58.612787 1536 memory_manager.go:346] "RemoveStaleState removing state" podUID="9a70ba3b-4d6c-4dc0-8ec0-27b96792b162" containerName="cilium-agent" Feb 12 19:48:58.682121 kubelet[1536]: I0212 19:48:58.682028 1536 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:48:58.770757 kubelet[1536]: I0212 19:48:58.769898 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-ipsec-secrets\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.771421 kubelet[1536]: I0212 19:48:58.771367 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-host-proc-sys-net\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.771909 kubelet[1536]: I0212 19:48:58.771881 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-host-proc-sys-kernel\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.772236 kubelet[1536]: 
I0212 19:48:58.772201 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjmc\" (UniqueName: \"kubernetes.io/projected/68a0fb70-eaee-478b-9ee0-3de89d4e9489-kube-api-access-vwjmc\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.773997 kubelet[1536]: I0212 19:48:58.773945 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68a0fb70-eaee-478b-9ee0-3de89d4e9489-clustermesh-secrets\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774261 kubelet[1536]: I0212 19:48:58.774170 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-run\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774261 kubelet[1536]: I0212 19:48:58.774220 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-hostproc\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774261 kubelet[1536]: I0212 19:48:58.774262 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-cgroup\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774490 kubelet[1536]: I0212 19:48:58.774299 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cni-path\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774490 kubelet[1536]: I0212 19:48:58.774387 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-etc-cni-netd\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774490 kubelet[1536]: I0212 19:48:58.774427 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-xtables-lock\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774490 kubelet[1536]: I0212 19:48:58.774462 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-config-path\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774490 kubelet[1536]: I0212 19:48:58.774497 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/68a0fb70-eaee-478b-9ee0-3de89d4e9489-hubble-tls\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774853 kubelet[1536]: I0212 19:48:58.774527 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-bpf-maps\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.774853 kubelet[1536]: I0212 19:48:58.774562 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-lib-modules\") pod \"cilium-k8zw6\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " pod="kube-system/cilium-k8zw6" Feb 12 19:48:58.880694 kubelet[1536]: I0212 19:48:58.875578 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4efdcf3f-69bf-4c0d-beb5-9d041d80d70e-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-nr7sz\" (UID: \"4efdcf3f-69bf-4c0d-beb5-9d041d80d70e\") " pod="kube-system/cilium-operator-f59cbd8c6-nr7sz" Feb 12 19:48:58.880694 kubelet[1536]: I0212 19:48:58.875772 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxg6x\" (UniqueName: \"kubernetes.io/projected/4efdcf3f-69bf-4c0d-beb5-9d041d80d70e-kube-api-access-bxg6x\") pod \"cilium-operator-f59cbd8c6-nr7sz\" (UID: \"4efdcf3f-69bf-4c0d-beb5-9d041d80d70e\") " pod="kube-system/cilium-operator-f59cbd8c6-nr7sz" Feb 12 19:48:59.222484 kubelet[1536]: E0212 19:48:59.222409 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:59.224005 env[1209]: time="2024-02-12T19:48:59.223353443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8zw6,Uid:68a0fb70-eaee-478b-9ee0-3de89d4e9489,Namespace:kube-system,Attempt:0,}" Feb 12 19:48:59.268297 env[1209]: time="2024-02-12T19:48:59.268165096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:48:59.268297 env[1209]: time="2024-02-12T19:48:59.268228087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:48:59.268669 env[1209]: time="2024-02-12T19:48:59.268252851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:48:59.269175 env[1209]: time="2024-02-12T19:48:59.268651462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48 pid=3170 runtime=io.containerd.runc.v2 Feb 12 19:48:59.296762 kubelet[1536]: E0212 19:48:59.296698 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:59.298119 env[1209]: time="2024-02-12T19:48:59.298037873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-nr7sz,Uid:4efdcf3f-69bf-4c0d-beb5-9d041d80d70e,Namespace:kube-system,Attempt:0,}" Feb 12 19:48:59.409888 env[1209]: time="2024-02-12T19:48:59.409713595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:48:59.409888 env[1209]: time="2024-02-12T19:48:59.409866680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:48:59.410274 env[1209]: time="2024-02-12T19:48:59.409922181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:48:59.410274 env[1209]: time="2024-02-12T19:48:59.410236686Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8be0090e97628c9f9d0124d2f3940a281ac4cfe87bfcf92b4a73bdc30385e4a9 pid=3211 runtime=io.containerd.runc.v2 Feb 12 19:48:59.416538 env[1209]: time="2024-02-12T19:48:59.416462928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8zw6,Uid:68a0fb70-eaee-478b-9ee0-3de89d4e9489,Namespace:kube-system,Attempt:0,} returns sandbox id \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\"" Feb 12 19:48:59.417626 kubelet[1536]: E0212 19:48:59.417560 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:59.422312 env[1209]: time="2024-02-12T19:48:59.422219944Z" level=info msg="CreateContainer within sandbox \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:48:59.450277 env[1209]: time="2024-02-12T19:48:59.448447479Z" level=info msg="CreateContainer within sandbox \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\"" Feb 12 19:48:59.453441 env[1209]: time="2024-02-12T19:48:59.453378616Z" level=info msg="StartContainer for \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\"" Feb 12 19:48:59.468315 kubelet[1536]: E0212 19:48:59.468256 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:48:59.646420 kubelet[1536]: E0212 19:48:59.644584 1536 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:48:59.646702 env[1209]: time="2024-02-12T19:48:59.643941338Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-nr7sz,Uid:4efdcf3f-69bf-4c0d-beb5-9d041d80d70e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8be0090e97628c9f9d0124d2f3940a281ac4cfe87bfcf92b4a73bdc30385e4a9\"" Feb 12 19:48:59.649891 kubelet[1536]: E0212 19:48:59.649826 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:48:59.657591 env[1209]: time="2024-02-12T19:48:59.657002804Z" level=info msg="StartContainer for \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\" returns successfully" Feb 12 19:48:59.663984 env[1209]: time="2024-02-12T19:48:59.663897818Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:48:59.769373 env[1209]: time="2024-02-12T19:48:59.768620297Z" level=info msg="shim disconnected" id=ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58 Feb 12 19:48:59.769373 env[1209]: time="2024-02-12T19:48:59.768720724Z" level=warning msg="cleaning up after shim disconnected" id=ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58 namespace=k8s.io Feb 12 19:48:59.769373 env[1209]: time="2024-02-12T19:48:59.768738406Z" level=info msg="cleaning up dead shim" Feb 12 19:48:59.808803 env[1209]: time="2024-02-12T19:48:59.808677507Z" level=info msg="StopContainer for \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\" with timeout 1 (s)" Feb 12 19:48:59.809165 env[1209]: time="2024-02-12T19:48:59.808909522Z" level=info msg="StopContainer for \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\" returns successfully" Feb 12 19:48:59.809264 env[1209]: time="2024-02-12T19:48:59.809215707Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:48:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3296 runtime=io.containerd.runc.v2\n" Feb 12 19:48:59.813060 env[1209]: time="2024-02-12T19:48:59.812956537Z" level=info msg="StopPodSandbox for \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\"" Feb 12 19:48:59.822137 env[1209]: time="2024-02-12T19:48:59.821715151Z" level=info msg="Container to stop \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:48:59.911954 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48-shm.mount: Deactivated successfully. Feb 12 19:48:59.968715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48-rootfs.mount: Deactivated successfully. 
Feb 12 19:49:00.009067 env[1209]: time="2024-02-12T19:49:00.008301616Z" level=info msg="shim disconnected" id=24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48 Feb 12 19:49:00.009067 env[1209]: time="2024-02-12T19:49:00.008363717Z" level=warning msg="cleaning up after shim disconnected" id=24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48 namespace=k8s.io Feb 12 19:49:00.009067 env[1209]: time="2024-02-12T19:49:00.008377934Z" level=info msg="cleaning up dead shim" Feb 12 19:49:00.086165 env[1209]: time="2024-02-12T19:49:00.086076049Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3329 runtime=io.containerd.runc.v2\n" Feb 12 19:49:00.087133 env[1209]: time="2024-02-12T19:49:00.087062369Z" level=info msg="TearDown network for sandbox \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" successfully" Feb 12 19:49:00.087390 env[1209]: time="2024-02-12T19:49:00.087355244Z" level=info msg="StopPodSandbox for \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" returns successfully" Feb 12 19:49:00.211725 kubelet[1536]: I0212 19:49:00.210500 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cni-path\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.211725 kubelet[1536]: I0212 19:49:00.210705 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-host-proc-sys-kernel\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.211725 kubelet[1536]: I0212 19:49:00.210753 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68a0fb70-eaee-478b-9ee0-3de89d4e9489-clustermesh-secrets\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.211725 kubelet[1536]: I0212 19:49:00.210785 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-run\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.211725 kubelet[1536]: I0212 19:49:00.210813 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-xtables-lock\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.211725 kubelet[1536]: I0212 19:49:00.210846 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-config-path\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212373 kubelet[1536]: I0212 19:49:00.210877 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-ipsec-secrets\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" 
(UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212373 kubelet[1536]: I0212 19:49:00.210903 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-host-proc-sys-net\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212373 kubelet[1536]: I0212 19:49:00.210931 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-lib-modules\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212373 kubelet[1536]: I0212 19:49:00.210963 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-cgroup\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212373 kubelet[1536]: I0212 19:49:00.210992 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-etc-cni-netd\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212373 kubelet[1536]: I0212 19:49:00.211022 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68a0fb70-eaee-478b-9ee0-3de89d4e9489-hubble-tls\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212800 kubelet[1536]: I0212 19:49:00.212453 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-bpf-maps\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212800 kubelet[1536]: I0212 19:49:00.212602 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwjmc\" (UniqueName: \"kubernetes.io/projected/68a0fb70-eaee-478b-9ee0-3de89d4e9489-kube-api-access-vwjmc\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212800 kubelet[1536]: I0212 19:49:00.212664 1536 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-hostproc\") pod \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\" (UID: \"68a0fb70-eaee-478b-9ee0-3de89d4e9489\") " Feb 12 19:49:00.212800 kubelet[1536]: I0212 19:49:00.212769 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-hostproc" (OuterVolumeSpecName: "hostproc") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.213077 kubelet[1536]: I0212 19:49:00.211963 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cni-path" (OuterVolumeSpecName: "cni-path") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.213077 kubelet[1536]: I0212 19:49:00.211996 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.213077 kubelet[1536]: I0212 19:49:00.212854 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.222464 kubelet[1536]: I0212 19:49:00.213541 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.222464 kubelet[1536]: I0212 19:49:00.213630 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.222464 kubelet[1536]: W0212 19:49:00.213895 1536 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/68a0fb70-eaee-478b-9ee0-3de89d4e9489/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 19:49:00.222464 kubelet[1536]: I0212 19:49:00.213927 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.222464 kubelet[1536]: I0212 19:49:00.213980 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.223006 kubelet[1536]: I0212 19:49:00.214015 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.223006 kubelet[1536]: I0212 19:49:00.215136 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:00.226065 systemd[1]: var-lib-kubelet-pods-68a0fb70\x2deaee\x2d478b\x2d9ee0\x2d3de89d4e9489-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:49:00.230639 kubelet[1536]: I0212 19:49:00.230516 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:49:00.231087 kubelet[1536]: I0212 19:49:00.231047 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a0fb70-eaee-478b-9ee0-3de89d4e9489-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:49:00.239558 kubelet[1536]: I0212 19:49:00.239495 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a0fb70-eaee-478b-9ee0-3de89d4e9489-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:00.242395 systemd[1]: var-lib-kubelet-pods-68a0fb70\x2deaee\x2d478b\x2d9ee0\x2d3de89d4e9489-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:49:00.251003 systemd[1]: var-lib-kubelet-pods-68a0fb70\x2deaee\x2d478b\x2d9ee0\x2d3de89d4e9489-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:49:00.253940 kubelet[1536]: I0212 19:49:00.253883 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a0fb70-eaee-478b-9ee0-3de89d4e9489-kube-api-access-vwjmc" (OuterVolumeSpecName: "kube-api-access-vwjmc") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "kube-api-access-vwjmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:00.254890 kubelet[1536]: I0212 19:49:00.254805 1536 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "68a0fb70-eaee-478b-9ee0-3de89d4e9489" (UID: "68a0fb70-eaee-478b-9ee0-3de89d4e9489"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:49:00.313679 kubelet[1536]: I0212 19:49:00.313498 1536 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-ipsec-secrets\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.313679 kubelet[1536]: I0212 19:49:00.313572 1536 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-host-proc-sys-net\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.313679 kubelet[1536]: I0212 19:49:00.313590 1536 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-lib-modules\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.313679 kubelet[1536]: I0212 19:49:00.313604 1536 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-cgroup\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.313679 kubelet[1536]: I0212 19:49:00.313619 1536 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-etc-cni-netd\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.313679 kubelet[1536]: I0212 19:49:00.313634 1536 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68a0fb70-eaee-478b-9ee0-3de89d4e9489-hubble-tls\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.313679 kubelet[1536]: I0212 19:49:00.313657 1536 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-vwjmc\" (UniqueName: \"kubernetes.io/projected/68a0fb70-eaee-478b-9ee0-3de89d4e9489-kube-api-access-vwjmc\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.313679 kubelet[1536]: I0212 19:49:00.313672 1536 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-hostproc\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.314428 kubelet[1536]: I0212 19:49:00.313705 1536 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-bpf-maps\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.314428 kubelet[1536]: I0212 19:49:00.313722 1536 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-host-proc-sys-kernel\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.314428 kubelet[1536]: I0212 19:49:00.313738 1536 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68a0fb70-eaee-478b-9ee0-3de89d4e9489-clustermesh-secrets\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 
19:49:00.314428 kubelet[1536]: I0212 19:49:00.313768 1536 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-run\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.314428 kubelet[1536]: I0212 19:49:00.313782 1536 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cni-path\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.314428 kubelet[1536]: I0212 19:49:00.313797 1536 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68a0fb70-eaee-478b-9ee0-3de89d4e9489-xtables-lock\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.314428 kubelet[1536]: I0212 19:49:00.313812 1536 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68a0fb70-eaee-478b-9ee0-3de89d4e9489-cilium-config-path\") on node \"64.23.171.188\" DevicePath \"\"" Feb 12 19:49:00.470301 kubelet[1536]: E0212 19:49:00.469395 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:49:00.820123 kubelet[1536]: I0212 19:49:00.820084 1536 scope.go:115] "RemoveContainer" containerID="ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58" Feb 12 19:49:00.827357 env[1209]: time="2024-02-12T19:49:00.827275853Z" level=info msg="RemoveContainer for \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\"" Feb 12 19:49:00.841448 env[1209]: time="2024-02-12T19:49:00.841379951Z" level=info msg="RemoveContainer for \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\" returns successfully" Feb 12 19:49:00.842708 kubelet[1536]: I0212 19:49:00.842056 1536 scope.go:115] "RemoveContainer" containerID="ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58" Feb 12 19:49:00.842906 env[1209]: time="2024-02-12T19:49:00.842754286Z" level=error msg="ContainerStatus for \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\": not found" Feb 12 19:49:00.843184 kubelet[1536]: E0212 19:49:00.843143 1536 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\": not found" containerID="ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58" Feb 12 19:49:00.843290 kubelet[1536]: I0212 19:49:00.843204 1536 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58} err="failed to get container status \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce2bf8bd408c150b41c77173156f8b9d19023051ea1cf45330b8f1d89fd0bf58\": not found" Feb 12 19:49:00.907703 systemd[1]: var-lib-kubelet-pods-68a0fb70\x2deaee\x2d478b\x2d9ee0\x2d3de89d4e9489-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvwjmc.mount: Deactivated successfully. 
Feb 12 19:49:00.970075 kubelet[1536]: I0212 19:49:00.963282 1536 topology_manager.go:210] "Topology Admit Handler" Feb 12 19:49:00.970075 kubelet[1536]: E0212 19:49:00.963378 1536 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68a0fb70-eaee-478b-9ee0-3de89d4e9489" containerName="mount-cgroup" Feb 12 19:49:00.970075 kubelet[1536]: I0212 19:49:00.963424 1536 memory_manager.go:346] "RemoveStaleState removing state" podUID="68a0fb70-eaee-478b-9ee0-3de89d4e9489" containerName="mount-cgroup" Feb 12 19:49:01.035920 kubelet[1536]: I0212 19:49:01.034716 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46def2bd-ffe1-4216-8dfd-52b433650f1f-cilium-config-path\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.035920 kubelet[1536]: I0212 19:49:01.034789 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/46def2bd-ffe1-4216-8dfd-52b433650f1f-cilium-ipsec-secrets\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.035920 kubelet[1536]: I0212 19:49:01.034839 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-host-proc-sys-kernel\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.035920 kubelet[1536]: I0212 19:49:01.034888 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcjh\" (UniqueName: \"kubernetes.io/projected/46def2bd-ffe1-4216-8dfd-52b433650f1f-kube-api-access-cmcjh\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.035920 kubelet[1536]: I0212 19:49:01.035020 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-hostproc\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036467 kubelet[1536]: I0212 19:49:01.035082 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-cni-path\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036467 kubelet[1536]: I0212 19:49:01.035124 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-cilium-cgroup\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036467 kubelet[1536]: I0212 19:49:01.035164 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-lib-modules\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036467 
kubelet[1536]: I0212 19:49:01.035207 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-host-proc-sys-net\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036467 kubelet[1536]: I0212 19:49:01.035257 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-etc-cni-netd\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036467 kubelet[1536]: I0212 19:49:01.035296 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-xtables-lock\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036790 kubelet[1536]: I0212 19:49:01.035331 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46def2bd-ffe1-4216-8dfd-52b433650f1f-clustermesh-secrets\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036790 kubelet[1536]: I0212 19:49:01.035368 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46def2bd-ffe1-4216-8dfd-52b433650f1f-hubble-tls\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036790 kubelet[1536]: I0212 19:49:01.035406 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-cilium-run\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.036790 kubelet[1536]: I0212 19:49:01.035454 1536 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46def2bd-ffe1-4216-8dfd-52b433650f1f-bpf-maps\") pod \"cilium-8jndm\" (UID: \"46def2bd-ffe1-4216-8dfd-52b433650f1f\") " pod="kube-system/cilium-8jndm" Feb 12 19:49:01.275388 kubelet[1536]: E0212 19:49:01.272628 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:49:01.276183 env[1209]: time="2024-02-12T19:49:01.276131258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jndm,Uid:46def2bd-ffe1-4216-8dfd-52b433650f1f,Namespace:kube-system,Attempt:0,}" Feb 12 19:49:01.479563 kubelet[1536]: E0212 19:49:01.479375 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 19:49:01.534382 env[1209]: time="2024-02-12T19:49:01.532820035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:49:01.534382 env[1209]: time="2024-02-12T19:49:01.532887764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:49:01.534382 env[1209]: time="2024-02-12T19:49:01.532906281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:49:01.549997 env[1209]: time="2024-02-12T19:49:01.546311388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e pid=3361 runtime=io.containerd.runc.v2 Feb 12 19:49:01.683885 env[1209]: time="2024-02-12T19:49:01.683817527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jndm,Uid:46def2bd-ffe1-4216-8dfd-52b433650f1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\"" Feb 12 19:49:01.685580 kubelet[1536]: E0212 19:49:01.685540 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 12 19:49:01.691040 env[1209]: time="2024-02-12T19:49:01.690939253Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:49:01.802655 env[1209]: time="2024-02-12T19:49:01.802440671Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6f80dc6aa5281cebd58d3827c3b01e85500a9bef3cc9c6f074a56de6deec0f7\"" Feb 12 19:49:01.804001 env[1209]: time="2024-02-12T19:49:01.803923076Z" level=info msg="StartContainer for \"c6f80dc6aa5281cebd58d3827c3b01e85500a9bef3cc9c6f074a56de6deec0f7\"" Feb 12 19:49:01.922822 env[1209]: time="2024-02-12T19:49:01.922726176Z" level=info msg="StopPodSandbox for \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\"" Feb 12 19:49:01.923502 env[1209]: time="2024-02-12T19:49:01.922886870Z" level=info msg="TearDown network for sandbox \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" successfully" Feb 12 19:49:01.923502 env[1209]: time="2024-02-12T19:49:01.922947345Z" level=info msg="StopPodSandbox for \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" returns successfully" Feb 12 19:49:01.926471 kubelet[1536]: I0212 19:49:01.926431 1536 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=68a0fb70-eaee-478b-9ee0-3de89d4e9489 path="/var/lib/kubelet/pods/68a0fb70-eaee-478b-9ee0-3de89d4e9489/volumes" Feb 12 19:49:01.976659 env[1209]: time="2024-02-12T19:49:01.976556752Z" level=info msg="StartContainer for \"c6f80dc6aa5281cebd58d3827c3b01e85500a9bef3cc9c6f074a56de6deec0f7\" returns successfully" Feb 12 19:49:02.059171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6f80dc6aa5281cebd58d3827c3b01e85500a9bef3cc9c6f074a56de6deec0f7-rootfs.mount: Deactivated successfully. 
Feb 12 19:49:02.184702 env[1209]: time="2024-02-12T19:49:02.184091125Z" level=info msg="shim disconnected" id=c6f80dc6aa5281cebd58d3827c3b01e85500a9bef3cc9c6f074a56de6deec0f7
Feb 12 19:49:02.184702 env[1209]: time="2024-02-12T19:49:02.184158968Z" level=warning msg="cleaning up after shim disconnected" id=c6f80dc6aa5281cebd58d3827c3b01e85500a9bef3cc9c6f074a56de6deec0f7 namespace=k8s.io
Feb 12 19:49:02.184702 env[1209]: time="2024-02-12T19:49:02.184178161Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:02.239213 env[1209]: time="2024-02-12T19:49:02.239039612Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3449 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:02.480943 kubelet[1536]: E0212 19:49:02.480481 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:02.838150 kubelet[1536]: E0212 19:49:02.837183 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:02.898867 env[1209]: time="2024-02-12T19:49:02.898810169Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 19:49:02.994136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3183723127.mount: Deactivated successfully.
Feb 12 19:49:03.040832 env[1209]: time="2024-02-12T19:49:03.040757066Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"47bbddf2ef5b3f456b486c8fe2af5f8096635f3a552e010d002c05acd580dd2e\""
Feb 12 19:49:03.047207 env[1209]: time="2024-02-12T19:49:03.047099909Z" level=info msg="StartContainer for \"47bbddf2ef5b3f456b486c8fe2af5f8096635f3a552e010d002c05acd580dd2e\""
Feb 12 19:49:03.386136 env[1209]: time="2024-02-12T19:49:03.383903425Z" level=info msg="StartContainer for \"47bbddf2ef5b3f456b486c8fe2af5f8096635f3a552e010d002c05acd580dd2e\" returns successfully"
Feb 12 19:49:03.481802 kubelet[1536]: E0212 19:49:03.481660 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:03.578802 env[1209]: time="2024-02-12T19:49:03.577892673Z" level=info msg="shim disconnected" id=47bbddf2ef5b3f456b486c8fe2af5f8096635f3a552e010d002c05acd580dd2e
Feb 12 19:49:03.578802 env[1209]: time="2024-02-12T19:49:03.578006546Z" level=warning msg="cleaning up after shim disconnected" id=47bbddf2ef5b3f456b486c8fe2af5f8096635f3a552e010d002c05acd580dd2e namespace=k8s.io
Feb 12 19:49:03.578802 env[1209]: time="2024-02-12T19:49:03.578023892Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:03.636508 env[1209]: time="2024-02-12T19:49:03.635294634Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3513 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:03.665286 env[1209]: time="2024-02-12T19:49:03.665149232Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:49:03.690799 env[1209]: time="2024-02-12T19:49:03.690730418Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:49:03.696080 env[1209]: time="2024-02-12T19:49:03.695991419Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 19:49:03.696841 env[1209]: time="2024-02-12T19:49:03.696795910Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 19:49:03.702147 env[1209]: time="2024-02-12T19:49:03.700880052Z" level=info msg="CreateContainer within sandbox \"8be0090e97628c9f9d0124d2f3940a281ac4cfe87bfcf92b4a73bdc30385e4a9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 19:49:03.742092 env[1209]: time="2024-02-12T19:49:03.741917573Z" level=info msg="CreateContainer within sandbox \"8be0090e97628c9f9d0124d2f3940a281ac4cfe87bfcf92b4a73bdc30385e4a9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"77783f4d661ea829e32ea18f332689ba2e43ed5a997d7fe6848f336d9073e1f1\""
Feb 12 19:49:03.746202 env[1209]: time="2024-02-12T19:49:03.746140868Z" level=info msg="StartContainer for \"77783f4d661ea829e32ea18f332689ba2e43ed5a997d7fe6848f336d9073e1f1\""
Feb 12 19:49:03.861237 kubelet[1536]: E0212 19:49:03.861136 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:03.867241 env[1209]: time="2024-02-12T19:49:03.867175480Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 19:49:03.990490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47bbddf2ef5b3f456b486c8fe2af5f8096635f3a552e010d002c05acd580dd2e-rootfs.mount: Deactivated successfully.
Feb 12 19:49:04.035344 env[1209]: time="2024-02-12T19:49:04.034744274Z" level=info msg="StartContainer for \"77783f4d661ea829e32ea18f332689ba2e43ed5a997d7fe6848f336d9073e1f1\" returns successfully"
Feb 12 19:49:04.037216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376752845.mount: Deactivated successfully.
Feb 12 19:49:04.085360 env[1209]: time="2024-02-12T19:49:04.085267849Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a90afd51798d82b0c369aa9c67727e757c66ed565741d1b69b86163ddd6ca2e7\""
Feb 12 19:49:04.095743 env[1209]: time="2024-02-12T19:49:04.095677968Z" level=info msg="StartContainer for \"a90afd51798d82b0c369aa9c67727e757c66ed565741d1b69b86163ddd6ca2e7\""
Feb 12 19:49:04.292015 env[1209]: time="2024-02-12T19:49:04.291928515Z" level=info msg="StartContainer for \"a90afd51798d82b0c369aa9c67727e757c66ed565741d1b69b86163ddd6ca2e7\" returns successfully"
Feb 12 19:49:04.334660 kubelet[1536]: I0212 19:49:04.333360 1536 setters.go:548] "Node became not ready" node="64.23.171.188" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 19:49:04.333308371 +0000 UTC m=+116.709723513 LastTransitionTime:2024-02-12 19:49:04.333308371 +0000 UTC m=+116.709723513 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 19:49:04.426035 env[1209]: time="2024-02-12T19:49:04.425957327Z" level=info msg="shim disconnected" id=a90afd51798d82b0c369aa9c67727e757c66ed565741d1b69b86163ddd6ca2e7
Feb 12 19:49:04.426757 env[1209]: time="2024-02-12T19:49:04.426707460Z" level=warning msg="cleaning up after shim disconnected" id=a90afd51798d82b0c369aa9c67727e757c66ed565741d1b69b86163ddd6ca2e7 namespace=k8s.io
Feb 12 19:49:04.426975 env[1209]: time="2024-02-12T19:49:04.426950229Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:04.482832 kubelet[1536]: E0212 19:49:04.482747 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:04.484689 env[1209]: time="2024-02-12T19:49:04.484631556Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3610 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:04.646846 kubelet[1536]: E0212 19:49:04.646653 1536 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:49:04.864390 kubelet[1536]: E0212 19:49:04.864335 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:04.884232 kubelet[1536]: E0212 19:49:04.880395 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:04.886163 env[1209]: time="2024-02-12T19:49:04.885714001Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 19:49:04.893391 kubelet[1536]: I0212 19:49:04.893330 1536 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-nr7sz" podStartSLOduration=-9.223372029961504e+09 pod.CreationTimestamp="2024-02-12 19:48:58 +0000 UTC" firstStartedPulling="2024-02-12 19:48:59.65494531 +0000 UTC m=+112.031360439" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:49:04.892232418 +0000 UTC m=+117.268647607" watchObservedRunningTime="2024-02-12 19:49:04.893271826 +0000 UTC m=+117.269686977"
Feb 12 19:49:04.931972 kubelet[1536]: E0212 19:49:04.931112 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:04.934323 env[1209]: time="2024-02-12T19:49:04.934250454Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a1054415279152ff1f09e57126b45fbff66c7cac3efc7c7b2baec2b705d840fa\""
Feb 12 19:49:04.962771 env[1209]: time="2024-02-12T19:49:04.950346591Z" level=info msg="StartContainer for \"a1054415279152ff1f09e57126b45fbff66c7cac3efc7c7b2baec2b705d840fa\""
Feb 12 19:49:04.981408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a90afd51798d82b0c369aa9c67727e757c66ed565741d1b69b86163ddd6ca2e7-rootfs.mount: Deactivated successfully.
Feb 12 19:49:05.124850 env[1209]: time="2024-02-12T19:49:05.124783398Z" level=info msg="StartContainer for \"a1054415279152ff1f09e57126b45fbff66c7cac3efc7c7b2baec2b705d840fa\" returns successfully"
Feb 12 19:49:05.165368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1054415279152ff1f09e57126b45fbff66c7cac3efc7c7b2baec2b705d840fa-rootfs.mount: Deactivated successfully.
Feb 12 19:49:05.226957 env[1209]: time="2024-02-12T19:49:05.219750379Z" level=info msg="shim disconnected" id=a1054415279152ff1f09e57126b45fbff66c7cac3efc7c7b2baec2b705d840fa
Feb 12 19:49:05.226957 env[1209]: time="2024-02-12T19:49:05.219838697Z" level=warning msg="cleaning up after shim disconnected" id=a1054415279152ff1f09e57126b45fbff66c7cac3efc7c7b2baec2b705d840fa namespace=k8s.io
Feb 12 19:49:05.226957 env[1209]: time="2024-02-12T19:49:05.219856973Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:05.263757 env[1209]: time="2024-02-12T19:49:05.263691869Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3668 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:05.484214 kubelet[1536]: E0212 19:49:05.483980 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:05.900735 kubelet[1536]: E0212 19:49:05.900697 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:05.902801 kubelet[1536]: E0212 19:49:05.902761 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:05.917482 env[1209]: time="2024-02-12T19:49:05.917400301Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 19:49:05.996075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756187343.mount: Deactivated successfully.
Feb 12 19:49:06.031878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247638570.mount: Deactivated successfully.
Feb 12 19:49:06.061985 env[1209]: time="2024-02-12T19:49:06.061903975Z" level=info msg="CreateContainer within sandbox \"f506634b39658197812357dab90421eac8fc74bd4fbfd016fcf29068b9474f7e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4be071b3860cdc045234e430478142d88db8cefadb21c274b82020b6e481d75b\""
Feb 12 19:49:06.065929 env[1209]: time="2024-02-12T19:49:06.063599509Z" level=info msg="StartContainer for \"4be071b3860cdc045234e430478142d88db8cefadb21c274b82020b6e481d75b\""
Feb 12 19:49:06.262603 env[1209]: time="2024-02-12T19:49:06.262298050Z" level=info msg="StartContainer for \"4be071b3860cdc045234e430478142d88db8cefadb21c274b82020b6e481d75b\" returns successfully"
Feb 12 19:49:06.484633 kubelet[1536]: E0212 19:49:06.484481 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:06.916293 kubelet[1536]: E0212 19:49:06.915769 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:06.960968 kubelet[1536]: I0212 19:49:06.960796 1536 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8jndm" podStartSLOduration=6.960720855 pod.CreationTimestamp="2024-02-12 19:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:49:06.955676142 +0000 UTC m=+119.332091299" watchObservedRunningTime="2024-02-12 19:49:06.960720855 +0000 UTC m=+119.337135992"
Feb 12 19:49:07.285136 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 19:49:07.485337 kubelet[1536]: E0212 19:49:07.484790 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:07.932884 kubelet[1536]: E0212 19:49:07.929910 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:07.983739 systemd[1]: run-containerd-runc-k8s.io-4be071b3860cdc045234e430478142d88db8cefadb21c274b82020b6e481d75b-runc.vboMQt.mount: Deactivated successfully.
Feb 12 19:49:08.485558 kubelet[1536]: E0212 19:49:08.485473 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:08.949688 kubelet[1536]: E0212 19:49:08.944899 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:09.195491 kubelet[1536]: E0212 19:49:09.195008 1536 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:09.241787 env[1209]: time="2024-02-12T19:49:09.241594912Z" level=info msg="StopPodSandbox for \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\""
Feb 12 19:49:09.242924 env[1209]: time="2024-02-12T19:49:09.242826755Z" level=info msg="TearDown network for sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" successfully"
Feb 12 19:49:09.243487 env[1209]: time="2024-02-12T19:49:09.243439783Z" level=info msg="StopPodSandbox for \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" returns successfully"
Feb 12 19:49:09.244720 env[1209]: time="2024-02-12T19:49:09.244672187Z" level=info msg="RemovePodSandbox for \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\""
Feb 12 19:49:09.245102 env[1209]: time="2024-02-12T19:49:09.244964313Z" level=info msg="Forcibly stopping sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\""
Feb 12 19:49:09.245388 env[1209]: time="2024-02-12T19:49:09.245351635Z" level=info msg="TearDown network for sandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" successfully"
Feb 12 19:49:09.259959 env[1209]: time="2024-02-12T19:49:09.259889577Z" level=info msg="RemovePodSandbox \"b3cb088eddcb92e3477f9b0128ea0a5341959ef3a86ca5fe48d8f679a63108ed\" returns successfully"
Feb 12 19:49:09.261469 env[1209]: time="2024-02-12T19:49:09.261396232Z" level=info msg="StopPodSandbox for \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\""
Feb 12 19:49:09.261906 env[1209]: time="2024-02-12T19:49:09.261826418Z" level=info msg="TearDown network for sandbox \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" successfully"
Feb 12 19:49:09.262109 env[1209]: time="2024-02-12T19:49:09.262075223Z" level=info msg="StopPodSandbox for \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" returns successfully"
Feb 12 19:49:09.269255 env[1209]: time="2024-02-12T19:49:09.269174259Z" level=info msg="RemovePodSandbox for \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\""
Feb 12 19:49:09.269661 env[1209]: time="2024-02-12T19:49:09.269575055Z" level=info msg="Forcibly stopping sandbox \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\""
Feb 12 19:49:09.269994 env[1209]: time="2024-02-12T19:49:09.269952144Z" level=info msg="TearDown network for sandbox \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" successfully"
Feb 12 19:49:09.298741 env[1209]: time="2024-02-12T19:49:09.298648009Z" level=info msg="RemovePodSandbox \"24320f0a88423614fc2a65a41adc4cf0f67a5c8b0af2eb3284870401c85f4d48\" returns successfully"
Feb 12 19:49:09.486674 kubelet[1536]: E0212 19:49:09.486012 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:10.392182 systemd[1]: run-containerd-runc-k8s.io-4be071b3860cdc045234e430478142d88db8cefadb21c274b82020b6e481d75b-runc.kUXjCb.mount: Deactivated successfully.
Feb 12 19:49:10.494739 kubelet[1536]: E0212 19:49:10.487834 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:11.488322 kubelet[1536]: E0212 19:49:11.488260 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:12.496313 kubelet[1536]: E0212 19:49:12.496227 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:12.727860 systemd-networkd[1085]: lxc_health: Link UP
Feb 12 19:49:12.767952 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 19:49:12.766944 systemd-networkd[1085]: lxc_health: Gained carrier
Feb 12 19:49:12.853172 systemd[1]: run-containerd-runc-k8s.io-4be071b3860cdc045234e430478142d88db8cefadb21c274b82020b6e481d75b-runc.d6OGMg.mount: Deactivated successfully.
Feb 12 19:49:13.277396 kubelet[1536]: E0212 19:49:13.277341 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:13.497155 kubelet[1536]: E0212 19:49:13.497095 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:13.968953 kubelet[1536]: E0212 19:49:13.968902 1536 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 12 19:49:14.498304 kubelet[1536]: E0212 19:49:14.498252 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:14.795395 systemd-networkd[1085]: lxc_health: Gained IPv6LL
Feb 12 19:49:15.499908 kubelet[1536]: E0212 19:49:15.499825 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:16.501541 kubelet[1536]: E0212 19:49:16.501461 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:17.502833 kubelet[1536]: E0212 19:49:17.502695 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:18.503202 kubelet[1536]: E0212 19:49:18.503139 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:19.504605 kubelet[1536]: E0212 19:49:19.504516 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:20.507323 kubelet[1536]: E0212 19:49:20.506681 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 19:49:21.508917 kubelet[1536]: E0212 19:49:21.508855 1536 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"