Feb 12 21:59:06.237166 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 12 21:59:06.237202 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:59:06.237218 kernel: BIOS-provided physical RAM map: Feb 12 21:59:06.237229 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 12 21:59:06.237240 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 12 21:59:06.237252 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 12 21:59:06.237268 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Feb 12 21:59:06.237281 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Feb 12 21:59:06.237293 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Feb 12 21:59:06.237305 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 12 21:59:06.237316 kernel: NX (Execute Disable) protection: active Feb 12 21:59:06.237327 kernel: SMBIOS 2.7 present. Feb 12 21:59:06.237339 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Feb 12 21:59:06.237351 kernel: Hypervisor detected: KVM Feb 12 21:59:06.237369 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 12 21:59:06.237382 kernel: kvm-clock: cpu 0, msr efaa001, primary cpu clock Feb 12 21:59:06.237396 kernel: kvm-clock: using sched offset of 7716182945 cycles Feb 12 21:59:06.237409 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 12 21:59:06.237422 kernel: tsc: Detected 2499.996 MHz processor Feb 12 21:59:06.237449 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 12 21:59:06.237465 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 12 21:59:06.237479 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Feb 12 21:59:06.237492 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 12 21:59:06.237505 kernel: Using GB pages for direct mapping Feb 12 21:59:06.237517 kernel: ACPI: Early table checksum verification disabled Feb 12 21:59:06.237530 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Feb 12 21:59:06.237544 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Feb 12 21:59:06.237557 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 12 21:59:06.237571 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Feb 12 21:59:06.237587 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Feb 12 21:59:06.237600 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 12 21:59:06.237613 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 12 21:59:06.237626 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Feb 12 21:59:06.237638 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 12 
21:59:06.237652 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Feb 12 21:59:06.237665 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Feb 12 21:59:06.237678 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 12 21:59:06.237694 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Feb 12 21:59:06.237708 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Feb 12 21:59:06.237722 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Feb 12 21:59:06.237741 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Feb 12 21:59:06.237757 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Feb 12 21:59:06.237772 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Feb 12 21:59:06.237787 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Feb 12 21:59:06.237805 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Feb 12 21:59:06.237819 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Feb 12 21:59:06.237835 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Feb 12 21:59:06.237850 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 12 21:59:06.237864 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 12 21:59:06.237879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Feb 12 21:59:06.237893 kernel: NUMA: Initialized distance table, cnt=1 Feb 12 21:59:06.237907 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Feb 12 21:59:06.237925 kernel: Zone ranges: Feb 12 21:59:06.237939 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 12 21:59:06.237953 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Feb 12 21:59:06.237967 kernel: Normal empty Feb 12 21:59:06.237981 kernel: Movable zone start for each node Feb 12 21:59:06.237995 kernel: Early memory node ranges Feb 12 21:59:06.238009 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 12 21:59:06.238023 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Feb 12 21:59:06.238037 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Feb 12 21:59:06.238054 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 12 21:59:06.238068 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 12 21:59:06.238083 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Feb 12 21:59:06.238097 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 12 21:59:06.238111 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 12 21:59:06.238126 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Feb 12 21:59:06.238140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 12 21:59:06.238154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 12 21:59:06.238168 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 12 21:59:06.238186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 12 21:59:06.238200 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 12 21:59:06.238214 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 12 21:59:06.238228 kernel: TSC deadline timer available Feb 12 21:59:06.238242 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 12 21:59:06.238256 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Feb 12 21:59:06.238270 kernel: Booting 
paravirtualized kernel on KVM Feb 12 21:59:06.238285 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 12 21:59:06.238299 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 12 21:59:06.238319 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 12 21:59:06.238332 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 12 21:59:06.238345 kernel: pcpu-alloc: [0] 0 1 Feb 12 21:59:06.238360 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Feb 12 21:59:06.238376 kernel: kvm-guest: PV spinlocks enabled Feb 12 21:59:06.238390 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 12 21:59:06.238405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Feb 12 21:59:06.238419 kernel: Policy zone: DMA32 Feb 12 21:59:06.253503 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:59:06.253531 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 12 21:59:06.253545 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 12 21:59:06.253560 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 12 21:59:06.253574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 12 21:59:06.253590 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved) Feb 12 21:59:06.253605 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 12 21:59:06.253619 kernel: Kernel/User page tables isolation: enabled Feb 12 21:59:06.253634 kernel: ftrace: allocating 34475 entries in 135 pages Feb 12 21:59:06.253738 kernel: ftrace: allocated 135 pages with 4 groups Feb 12 21:59:06.253754 kernel: rcu: Hierarchical RCU implementation. Feb 12 21:59:06.253769 kernel: rcu: RCU event tracing is enabled. Feb 12 21:59:06.253784 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 12 21:59:06.253799 kernel: Rude variant of Tasks RCU enabled. Feb 12 21:59:06.253813 kernel: Tracing variant of Tasks RCU enabled. Feb 12 21:59:06.253828 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 12 21:59:06.253842 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 12 21:59:06.253856 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 12 21:59:06.253872 kernel: random: crng init done Feb 12 21:59:06.253886 kernel: Console: colour VGA+ 80x25 Feb 12 21:59:06.253900 kernel: printk: console [ttyS0] enabled Feb 12 21:59:06.253915 kernel: ACPI: Core revision 20210730 Feb 12 21:59:06.253929 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Feb 12 21:59:06.253943 kernel: APIC: Switch to symmetric I/O mode setup Feb 12 21:59:06.253957 kernel: x2apic enabled Feb 12 21:59:06.253971 kernel: Switched APIC routing to physical x2apic. 
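The kernel command line echoed above (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., nvme_core.io_timeout=4294967295, and so on) is the same string later handed to dracut and to user space. A minimal sketch, assuming only a Linux host where /proc/cmdline is readable, that splits such a command line into bare flags and key=value parameters for inspection; the parameter names come from this log, not from any API:

    # Minimal sketch: split a kernel command line like the one logged above
    # into bare flags and key=value pairs. Assumes /proc/cmdline is readable.
    def parse_cmdline(text):
        flags, params = [], {}
        for token in text.split():
            if "=" in token:
                key, value = token.split("=", 1)
                params[key] = value          # repeated keys: last value wins
            else:
                flags.append(token)
        return flags, params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            flags, params = parse_cmdline(f.read())
        print("flags:", flags)
        print("root:", params.get("root"))                   # LABEL=ROOT in this boot
        print("verity.usrhash:", params.get("verity.usrhash"))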
Feb 12 21:59:06.253986 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 12 21:59:06.254001 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Feb 12 21:59:06.254017 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 12 21:59:06.254032 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 12 21:59:06.254046 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 12 21:59:06.254070 kernel: Spectre V2 : Mitigation: Retpolines Feb 12 21:59:06.254087 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 12 21:59:06.254101 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 12 21:59:06.254116 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 12 21:59:06.254130 kernel: RETBleed: Vulnerable Feb 12 21:59:06.254145 kernel: Speculative Store Bypass: Vulnerable Feb 12 21:59:06.254160 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Feb 12 21:59:06.254174 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 12 21:59:06.254189 kernel: GDS: Unknown: Dependent on hypervisor status Feb 12 21:59:06.254203 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 12 21:59:06.254220 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 12 21:59:06.254236 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 12 21:59:06.254250 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 12 21:59:06.254265 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 12 21:59:06.254280 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 12 21:59:06.254295 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 12 21:59:06.254312 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 12 21:59:06.254327 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 12 21:59:06.254341 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 12 21:59:06.254356 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 12 21:59:06.254372 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 12 21:59:06.254387 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Feb 12 21:59:06.254401 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Feb 12 21:59:06.254416 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Feb 12 21:59:06.254444 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Feb 12 21:59:06.254459 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Feb 12 21:59:06.254520 kernel: Freeing SMP alternatives memory: 32K Feb 12 21:59:06.254540 kernel: pid_max: default: 32768 minimum: 301 Feb 12 21:59:06.254556 kernel: LSM: Security Framework initializing Feb 12 21:59:06.254571 kernel: SELinux: Initializing. 
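The mitigation lines above (Spectre V2 retpolines; RETBleed, MDS and MMIO Stale Data reported vulnerable on this guest) are also exported at runtime through sysfs, so the same status can be checked without scraping dmesg. A short sketch, assuming the standard /sys/devices/system/cpu/vulnerabilities directory is present:

    # Sketch: print the kernel's per-vulnerability mitigation status, the
    # runtime counterpart of the Spectre/RETBleed/MDS lines in the log.
    # Assumes /sys/devices/system/cpu/vulnerabilities exists.
    import os

    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            print(f"{name}: {f.read().strip()}")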
Feb 12 21:59:06.254586 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 21:59:06.254600 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 12 21:59:06.254614 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 12 21:59:06.254629 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 12 21:59:06.254644 kernel: signal: max sigframe size: 3632 Feb 12 21:59:06.254659 kernel: rcu: Hierarchical SRCU implementation. Feb 12 21:59:06.254674 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 12 21:59:06.254689 kernel: smp: Bringing up secondary CPUs ... Feb 12 21:59:06.254707 kernel: x86: Booting SMP configuration: Feb 12 21:59:06.254722 kernel: .... node #0, CPUs: #1 Feb 12 21:59:06.254737 kernel: kvm-clock: cpu 1, msr efaa041, secondary cpu clock Feb 12 21:59:06.254752 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Feb 12 21:59:06.254769 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 12 21:59:06.254785 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 12 21:59:06.254800 kernel: smp: Brought up 1 node, 2 CPUs Feb 12 21:59:06.254815 kernel: smpboot: Max logical packages: 1 Feb 12 21:59:06.254833 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Feb 12 21:59:06.254848 kernel: devtmpfs: initialized Feb 12 21:59:06.254864 kernel: x86/mm: Memory block size: 128MB Feb 12 21:59:06.254880 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 12 21:59:06.254895 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 12 21:59:06.254911 kernel: pinctrl core: initialized pinctrl subsystem Feb 12 21:59:06.254926 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 12 21:59:06.254941 kernel: audit: initializing netlink subsys (disabled) Feb 12 21:59:06.254957 kernel: audit: type=2000 audit(1707775144.773:1): state=initialized audit_enabled=0 res=1 Feb 12 21:59:06.254974 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 12 21:59:06.254990 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 12 21:59:06.255005 kernel: cpuidle: using governor menu Feb 12 21:59:06.255020 kernel: ACPI: bus type PCI registered Feb 12 21:59:06.255035 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 12 21:59:06.255049 kernel: dca service started, version 1.12.1 Feb 12 21:59:06.255064 kernel: PCI: Using configuration type 1 for base access Feb 12 21:59:06.255079 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 12 21:59:06.255094 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 12 21:59:06.255111 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 12 21:59:06.255126 kernel: ACPI: Added _OSI(Module Device) Feb 12 21:59:06.255140 kernel: ACPI: Added _OSI(Processor Device) Feb 12 21:59:06.255155 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 12 21:59:06.255170 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 12 21:59:06.255185 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 12 21:59:06.255200 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 12 21:59:06.255214 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 12 21:59:06.255229 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 12 21:59:06.255247 kernel: ACPI: Interpreter enabled Feb 12 21:59:06.255261 kernel: ACPI: PM: (supports S0 S5) Feb 12 21:59:06.255277 kernel: ACPI: Using IOAPIC for interrupt routing Feb 12 21:59:06.255292 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 12 21:59:06.255307 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 12 21:59:06.255322 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 12 21:59:06.265651 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 12 21:59:06.265807 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Feb 12 21:59:06.265833 kernel: acpiphp: Slot [3] registered Feb 12 21:59:06.265848 kernel: acpiphp: Slot [4] registered Feb 12 21:59:06.265863 kernel: acpiphp: Slot [5] registered Feb 12 21:59:06.265878 kernel: acpiphp: Slot [6] registered Feb 12 21:59:06.265893 kernel: acpiphp: Slot [7] registered Feb 12 21:59:06.265907 kernel: acpiphp: Slot [8] registered Feb 12 21:59:06.265921 kernel: acpiphp: Slot [9] registered Feb 12 21:59:06.265935 kernel: acpiphp: Slot [10] registered Feb 12 21:59:06.265950 kernel: acpiphp: Slot [11] registered Feb 12 21:59:06.265967 kernel: acpiphp: Slot [12] registered Feb 12 21:59:06.265982 kernel: acpiphp: Slot [13] registered Feb 12 21:59:06.265996 kernel: acpiphp: Slot [14] registered Feb 12 21:59:06.266010 kernel: acpiphp: Slot [15] registered Feb 12 21:59:06.266025 kernel: acpiphp: Slot [16] registered Feb 12 21:59:06.266039 kernel: acpiphp: Slot [17] registered Feb 12 21:59:06.266054 kernel: acpiphp: Slot [18] registered Feb 12 21:59:06.266068 kernel: acpiphp: Slot [19] registered Feb 12 21:59:06.266082 kernel: acpiphp: Slot [20] registered Feb 12 21:59:06.266099 kernel: acpiphp: Slot [21] registered Feb 12 21:59:06.266114 kernel: acpiphp: Slot [22] registered Feb 12 21:59:06.266128 kernel: acpiphp: Slot [23] registered Feb 12 21:59:06.266142 kernel: acpiphp: Slot [24] registered Feb 12 21:59:06.266156 kernel: acpiphp: Slot [25] registered Feb 12 21:59:06.266170 kernel: acpiphp: Slot [26] registered Feb 12 21:59:06.266185 kernel: acpiphp: Slot [27] registered Feb 12 21:59:06.266199 kernel: acpiphp: Slot [28] registered Feb 12 21:59:06.266214 kernel: acpiphp: Slot [29] registered Feb 12 21:59:06.266228 kernel: acpiphp: Slot [30] registered Feb 12 21:59:06.266244 kernel: acpiphp: Slot [31] registered Feb 12 21:59:06.266259 kernel: PCI host bridge to bus 0000:00 Feb 12 21:59:06.266380 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 12 21:59:06.266503 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 12 21:59:06.266614 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Feb 12 21:59:06.266722 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 12 21:59:06.266829 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 12 21:59:06.266964 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 12 21:59:06.267093 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 12 21:59:06.267221 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Feb 12 21:59:06.267339 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 12 21:59:06.267520 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 12 21:59:06.267645 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Feb 12 21:59:06.267764 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Feb 12 21:59:06.267895 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Feb 12 21:59:06.268015 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Feb 12 21:59:06.268132 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Feb 12 21:59:06.268245 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Feb 12 21:59:06.268361 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 10742 usecs Feb 12 21:59:06.287646 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Feb 12 21:59:06.287838 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Feb 12 21:59:06.287978 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 12 21:59:06.288105 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 12 21:59:06.288240 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 12 21:59:06.288368 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Feb 12 21:59:06.288517 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 12 21:59:06.288645 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Feb 12 21:59:06.288668 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 12 21:59:06.288684 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 12 21:59:06.288699 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 12 21:59:06.288714 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 12 21:59:06.288729 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 12 21:59:06.288745 kernel: iommu: Default domain type: Translated Feb 12 21:59:06.288760 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 12 21:59:06.288885 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Feb 12 21:59:06.289011 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 12 21:59:06.289141 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Feb 12 21:59:06.289160 kernel: vgaarb: loaded Feb 12 21:59:06.289175 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 12 21:59:06.289191 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 12 21:59:06.289206 kernel: PTP clock support registered Feb 12 21:59:06.289221 kernel: PCI: Using ACPI for IRQ routing Feb 12 21:59:06.289235 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 12 21:59:06.289250 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 12 21:59:06.289267 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Feb 12 21:59:06.289282 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 12 21:59:06.289297 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Feb 12 21:59:06.289312 kernel: clocksource: Switched to clocksource kvm-clock Feb 12 21:59:06.289326 kernel: VFS: Disk quotas dquot_6.6.0 Feb 12 21:59:06.289342 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 12 21:59:06.289357 kernel: pnp: PnP ACPI init Feb 12 21:59:06.289371 kernel: pnp: PnP ACPI: found 5 devices Feb 12 21:59:06.289386 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 12 21:59:06.289403 kernel: NET: Registered PF_INET protocol family Feb 12 21:59:06.289419 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 12 21:59:06.289444 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 12 21:59:06.289459 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 12 21:59:06.289475 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 12 21:59:06.289490 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 12 21:59:06.289505 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 12 21:59:06.289520 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 21:59:06.289535 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 12 21:59:06.289553 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 12 21:59:06.289568 kernel: NET: Registered PF_XDP protocol family Feb 12 21:59:06.289691 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 12 21:59:06.289874 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 12 21:59:06.290002 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 12 21:59:06.290115 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 12 21:59:06.290342 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 12 21:59:06.290489 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 12 21:59:06.290514 kernel: PCI: CLS 0 bytes, default 64 Feb 12 21:59:06.290530 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 12 21:59:06.290546 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 12 21:59:06.290562 kernel: clocksource: Switched to clocksource tsc Feb 12 21:59:06.290577 kernel: Initialise system trusted keyrings Feb 12 21:59:06.290592 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 12 21:59:06.290607 kernel: Key type asymmetric registered Feb 12 21:59:06.290622 kernel: Asymmetric key parser 'x509' registered Feb 12 21:59:06.290639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 12 21:59:06.290654 kernel: io scheduler mq-deadline registered Feb 12 21:59:06.290669 kernel: io scheduler kyber registered Feb 12 21:59:06.290684 kernel: io scheduler bfq registered Feb 12 
21:59:06.290699 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 12 21:59:06.290714 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 12 21:59:06.290728 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 12 21:59:06.290743 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 12 21:59:06.290758 kernel: i8042: Warning: Keylock active Feb 12 21:59:06.290776 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 12 21:59:06.290790 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 12 21:59:06.290925 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 12 21:59:06.291045 kernel: rtc_cmos 00:00: registered as rtc0 Feb 12 21:59:06.291160 kernel: rtc_cmos 00:00: setting system clock to 2024-02-12T21:59:05 UTC (1707775145) Feb 12 21:59:06.291274 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 12 21:59:06.291293 kernel: intel_pstate: CPU model not supported Feb 12 21:59:06.291308 kernel: NET: Registered PF_INET6 protocol family Feb 12 21:59:06.291325 kernel: Segment Routing with IPv6 Feb 12 21:59:06.291402 kernel: In-situ OAM (IOAM) with IPv6 Feb 12 21:59:06.291416 kernel: NET: Registered PF_PACKET protocol family Feb 12 21:59:06.299504 kernel: Key type dns_resolver registered Feb 12 21:59:06.299533 kernel: IPI shorthand broadcast: enabled Feb 12 21:59:06.299549 kernel: sched_clock: Marking stable (451231622, 423551314)->(999648006, -124865070) Feb 12 21:59:06.299564 kernel: registered taskstats version 1 Feb 12 21:59:06.299578 kernel: Loading compiled-in X.509 certificates Feb 12 21:59:06.299592 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 12 21:59:06.299616 kernel: Key type .fscrypt registered Feb 12 21:59:06.299629 kernel: Key type fscrypt-provisioning registered Feb 12 21:59:06.299644 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 12 21:59:06.299657 kernel: ima: Allocated hash algorithm: sha1 Feb 12 21:59:06.299671 kernel: ima: No architecture policies found Feb 12 21:59:06.299685 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 12 21:59:06.299699 kernel: Write protecting the kernel read-only data: 28672k Feb 12 21:59:06.299712 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 12 21:59:06.299726 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 12 21:59:06.299742 kernel: Run /init as init process Feb 12 21:59:06.299756 kernel: with arguments: Feb 12 21:59:06.299771 kernel: /init Feb 12 21:59:06.299793 kernel: with environment: Feb 12 21:59:06.299806 kernel: HOME=/ Feb 12 21:59:06.299819 kernel: TERM=linux Feb 12 21:59:06.299833 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 12 21:59:06.299853 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 21:59:06.299873 systemd[1]: Detected virtualization amazon. Feb 12 21:59:06.299887 systemd[1]: Detected architecture x86-64. Feb 12 21:59:06.299901 systemd[1]: Running in initrd. Feb 12 21:59:06.299915 systemd[1]: No hostname configured, using default hostname. Feb 12 21:59:06.299943 systemd[1]: Hostname set to . Feb 12 21:59:06.299963 systemd[1]: Initializing machine ID from VM UUID. 
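systemd logs "Initializing machine ID from VM UUID"; on EC2 that UUID comes from the SMBIOS/DMI data the hypervisor exposes (the same source as the "DMI: Amazon EC2 t3.small" line earlier). A hedged sketch reading the usual DMI fields under /sys/class/dmi/id; product_uuid is normally readable by root only, so unreadable fields are skipped rather than guessed:

    # Sketch: read the DMI identifiers behind "machine ID from VM UUID".
    # Assumes the standard sysfs DMI paths; product_uuid usually requires root.
    from pathlib import Path

    DMI = Path("/sys/class/dmi/id")

    for field in ("sys_vendor", "product_name", "product_uuid"):
        try:
            print(field, "=", (DMI / field).read_text().strip())
        except (FileNotFoundError, PermissionError):
            print(field, "= unavailable")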
Feb 12 21:59:06.299978 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 12 21:59:06.299994 systemd[1]: Queued start job for default target initrd.target. Feb 12 21:59:06.300008 systemd[1]: Started systemd-ask-password-console.path. Feb 12 21:59:06.300022 systemd[1]: Reached target cryptsetup.target. Feb 12 21:59:06.300037 systemd[1]: Reached target paths.target. Feb 12 21:59:06.300052 systemd[1]: Reached target slices.target. Feb 12 21:59:06.300066 systemd[1]: Reached target swap.target. Feb 12 21:59:06.300081 systemd[1]: Reached target timers.target. Feb 12 21:59:06.300099 systemd[1]: Listening on iscsid.socket. Feb 12 21:59:06.300113 systemd[1]: Listening on iscsiuio.socket. Feb 12 21:59:06.300127 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 21:59:06.300143 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 21:59:06.300159 systemd[1]: Listening on systemd-journald.socket. Feb 12 21:59:06.300174 systemd[1]: Listening on systemd-networkd.socket. Feb 12 21:59:06.300190 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 21:59:06.300208 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 21:59:06.300231 systemd[1]: Reached target sockets.target. Feb 12 21:59:06.300249 systemd[1]: Starting kmod-static-nodes.service... Feb 12 21:59:06.300266 systemd[1]: Finished network-cleanup.service. Feb 12 21:59:06.300282 systemd[1]: Starting systemd-fsck-usr.service... Feb 12 21:59:06.300297 systemd[1]: Starting systemd-journald.service... Feb 12 21:59:06.300311 systemd[1]: Starting systemd-modules-load.service... Feb 12 21:59:06.300325 systemd[1]: Starting systemd-resolved.service... Feb 12 21:59:06.300339 systemd[1]: Starting systemd-vconsole-setup.service... Feb 12 21:59:06.300354 systemd[1]: Finished kmod-static-nodes.service. Feb 12 21:59:06.300378 systemd-journald[184]: Journal started Feb 12 21:59:06.300488 systemd-journald[184]: Runtime Journal (/run/log/journal/ec2146a4016b96801d3ca06f7838a55d) is 4.8M, max 38.7M, 33.9M free. Feb 12 21:59:06.272869 systemd-modules-load[185]: Inserted module 'overlay' Feb 12 21:59:06.422972 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 12 21:59:06.423004 kernel: Bridge firewalling registered Feb 12 21:59:06.423022 kernel: SCSI subsystem initialized Feb 12 21:59:06.423038 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 21:59:06.423059 kernel: device-mapper: uevent: version 1.0.3 Feb 12 21:59:06.423079 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 21:59:06.279185 systemd-resolved[186]: Positive Trust Anchors: Feb 12 21:59:06.433768 systemd[1]: Started systemd-resolved.service. Feb 12 21:59:06.433811 kernel: audit: type=1130 audit(1707775146.425:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.279197 systemd-resolved[186]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 21:59:06.279250 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 21:59:06.462099 systemd[1]: Started systemd-journald.service. Feb 12 21:59:06.292790 systemd-resolved[186]: Defaulting to hostname 'linux'. Feb 12 21:59:06.469783 kernel: audit: type=1130 audit(1707775146.447:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.308957 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 12 21:59:06.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.476799 kernel: audit: type=1130 audit(1707775146.468:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.352630 systemd-modules-load[185]: Inserted module 'dm_multipath' Feb 12 21:59:06.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.470233 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 21:59:06.488834 kernel: audit: type=1130 audit(1707775146.475:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.488864 kernel: audit: type=1130 audit(1707775146.482:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.477519 systemd[1]: Finished systemd-modules-load.service. Feb 12 21:59:06.490029 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 21:59:06.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.492618 systemd[1]: Reached target nss-lookup.target. Feb 12 21:59:06.499910 kernel: audit: type=1130 audit(1707775146.491:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:59:06.502934 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 21:59:06.506797 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:59:06.511222 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 21:59:06.532923 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 21:59:06.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.541459 kernel: audit: type=1130 audit(1707775146.531:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.543371 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:59:06.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.548445 kernel: audit: type=1130 audit(1707775146.542:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.552757 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 21:59:06.553971 systemd[1]: Starting dracut-cmdline.service... Feb 12 21:59:06.563285 kernel: audit: type=1130 audit(1707775146.551:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.566583 dracut-cmdline[207]: dracut-dracut-053 Feb 12 21:59:06.570512 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:59:06.661456 kernel: Loading iSCSI transport class v2.0-870. Feb 12 21:59:06.680590 kernel: iscsi: registered transport (tcp) Feb 12 21:59:06.708457 kernel: iscsi: registered transport (qla4xxx) Feb 12 21:59:06.708592 kernel: QLogic iSCSI HBA Driver Feb 12 21:59:06.769001 systemd[1]: Finished dracut-cmdline.service. Feb 12 21:59:06.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:06.772918 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 21:59:06.834475 kernel: raid6: avx512x4 gen() 14670 MB/s Feb 12 21:59:06.854883 kernel: raid6: avx512x4 xor() 4553 MB/s Feb 12 21:59:06.875777 kernel: raid6: avx512x2 gen() 4290 MB/s Feb 12 21:59:06.895042 kernel: raid6: avx512x2 xor() 10847 MB/s Feb 12 21:59:06.913545 kernel: raid6: avx512x1 gen() 11438 MB/s Feb 12 21:59:06.930477 kernel: raid6: avx512x1 xor() 15405 MB/s Feb 12 21:59:06.947472 kernel: raid6: avx2x4 gen() 14037 MB/s Feb 12 21:59:06.965464 kernel: raid6: avx2x4 xor() 4309 MB/s Feb 12 21:59:06.983466 kernel: raid6: avx2x2 gen() 11334 MB/s Feb 12 21:59:07.000468 kernel: raid6: avx2x2 xor() 12350 MB/s Feb 12 21:59:07.018583 kernel: raid6: avx2x1 gen() 6539 MB/s Feb 12 21:59:07.035475 kernel: raid6: avx2x1 xor() 9366 MB/s Feb 12 21:59:07.053485 kernel: raid6: sse2x4 gen() 7920 MB/s Feb 12 21:59:07.071468 kernel: raid6: sse2x4 xor() 4643 MB/s Feb 12 21:59:07.089494 kernel: raid6: sse2x2 gen() 7766 MB/s Feb 12 21:59:07.107494 kernel: raid6: sse2x2 xor() 4260 MB/s Feb 12 21:59:07.125466 kernel: raid6: sse2x1 gen() 6827 MB/s Feb 12 21:59:07.146221 kernel: raid6: sse2x1 xor() 1550 MB/s Feb 12 21:59:07.146307 kernel: raid6: using algorithm avx512x4 gen() 14670 MB/s Feb 12 21:59:07.146325 kernel: raid6: .... xor() 4553 MB/s, rmw enabled Feb 12 21:59:07.148506 kernel: raid6: using avx512x2 recovery algorithm Feb 12 21:59:07.171477 kernel: xor: automatically using best checksumming function avx Feb 12 21:59:07.348496 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 21:59:07.395238 systemd[1]: Finished dracut-pre-udev.service. Feb 12 21:59:07.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:07.398000 audit: BPF prog-id=7 op=LOAD Feb 12 21:59:07.398000 audit: BPF prog-id=8 op=LOAD Feb 12 21:59:07.406404 systemd[1]: Starting systemd-udevd.service... Feb 12 21:59:07.428577 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 21:59:07.436179 systemd[1]: Started systemd-udevd.service. Feb 12 21:59:07.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:07.446914 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 21:59:07.468755 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Feb 12 21:59:07.511566 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 21:59:07.514128 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 21:59:07.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:07.570619 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 21:59:07.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:07.684288 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 12 21:59:07.684552 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 12 21:59:07.684687 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Feb 12 21:59:07.694953 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:74:5a:ac:fc:43 Feb 12 21:59:07.699489 (udev-worker)[441]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:59:07.880281 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 21:59:07.880314 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 12 21:59:07.880620 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 12 21:59:07.880640 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 21:59:07.880655 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 12 21:59:07.880938 kernel: AES CTR mode by8 optimization enabled Feb 12 21:59:07.880967 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 21:59:07.880997 kernel: GPT:9289727 != 16777215 Feb 12 21:59:07.881018 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 21:59:07.881034 kernel: GPT:9289727 != 16777215 Feb 12 21:59:07.881050 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 21:59:07.881066 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 21:59:07.881083 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442) Feb 12 21:59:07.885065 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 21:59:07.932033 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 21:59:07.959830 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 21:59:07.964024 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 21:59:07.974681 systemd[1]: Starting disk-uuid.service... Feb 12 21:59:07.985763 disk-uuid[592]: Primary Header is updated. Feb 12 21:59:07.985763 disk-uuid[592]: Secondary Entries is updated. Feb 12 21:59:07.985763 disk-uuid[592]: Secondary Header is updated. Feb 12 21:59:07.999445 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 21:59:07.999738 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 21:59:08.007452 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 21:59:09.010567 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 21:59:09.011912 disk-uuid[593]: The operation has completed successfully. Feb 12 21:59:09.165091 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 21:59:09.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.165322 systemd[1]: Finished disk-uuid.service. Feb 12 21:59:09.175761 systemd[1]: Starting verity-setup.service... Feb 12 21:59:09.226605 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 21:59:09.308044 systemd[1]: Found device dev-mapper-usr.device. Feb 12 21:59:09.313175 systemd[1]: Mounting sysusr-usr.mount... Feb 12 21:59:09.317635 systemd[1]: Finished verity-setup.service. Feb 12 21:59:09.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.418453 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 21:59:09.419066 systemd[1]: Mounted sysusr-usr.mount. 
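The GPT warnings above (9289727 != 16777215, "Use GNU Parted to correct GPT errors") are the usual sign of an image written to a larger volume: the backup GPT header still sits where the smaller image ended rather than at the last LBA, and the disk-uuid step that follows logs rewritten primary and secondary headers. A sketch of the same check the kernel performs, assuming 512-byte logical sectors and read access to the block device (run as root); the /dev/nvme0n1 path is taken from this log:

    # Sketch: reproduce the "Alt. header is not at the end of the disk" check.
    # Assumes 512-byte logical sectors and read access to the block device.
    import os, struct, sys

    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/nvme0n1"
    fd = os.open(dev, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)        # device size in bytes
        last_lba = size // 512 - 1                 # 16777215 in the log
        os.lseek(fd, 512, os.SEEK_SET)             # primary GPT header at LBA 1
        header = os.read(fd, 92)
        if header[:8] != b"EFI PART":
            raise SystemExit("no GPT signature found")
        backup_lba = struct.unpack_from("<Q", header, 32)[0]   # 9289727 in the log
        status = "at end of disk" if backup_lba == last_lba else "NOT at end of disk"
        print(f"backup GPT header at LBA {backup_lba}, last LBA {last_lba}: {status}")
    finally:
        os.close(fd)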
Feb 12 21:59:09.420905 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 21:59:09.423512 systemd[1]: Starting ignition-setup.service... Feb 12 21:59:09.425979 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 21:59:09.445974 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 12 21:59:09.446046 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 12 21:59:09.446065 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 12 21:59:09.456453 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 12 21:59:09.471153 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 21:59:09.513452 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 21:59:09.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.515000 audit: BPF prog-id=9 op=LOAD Feb 12 21:59:09.517811 systemd[1]: Starting systemd-networkd.service... Feb 12 21:59:09.532880 systemd[1]: Finished ignition-setup.service. Feb 12 21:59:09.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.536270 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 21:59:09.558206 systemd-networkd[1019]: lo: Link UP Feb 12 21:59:09.558220 systemd-networkd[1019]: lo: Gained carrier Feb 12 21:59:09.558922 systemd-networkd[1019]: Enumeration completed Feb 12 21:59:09.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.559043 systemd[1]: Started systemd-networkd.service. Feb 12 21:59:09.559566 systemd-networkd[1019]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 21:59:09.563200 systemd[1]: Reached target network.target. Feb 12 21:59:09.569598 systemd[1]: Starting iscsiuio.service... Feb 12 21:59:09.571630 systemd-networkd[1019]: eth0: Link UP Feb 12 21:59:09.571637 systemd-networkd[1019]: eth0: Gained carrier Feb 12 21:59:09.578814 systemd[1]: Started iscsiuio.service. Feb 12 21:59:09.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.580270 systemd[1]: Starting iscsid.service... Feb 12 21:59:09.586441 iscsid[1026]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 21:59:09.586441 iscsid[1026]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 12 21:59:09.586441 iscsid[1026]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 21:59:09.586441 iscsid[1026]: If using hardware iscsi like qla4xxx this message can be ignored. 
Feb 12 21:59:09.586441 iscsid[1026]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 21:59:09.586441 iscsid[1026]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 21:59:09.587822 systemd[1]: Started iscsid.service. Feb 12 21:59:09.605243 systemd-networkd[1019]: eth0: DHCPv4 address 172.31.16.81/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 12 21:59:09.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.610333 systemd[1]: Starting dracut-initqueue.service... Feb 12 21:59:09.628758 systemd[1]: Finished dracut-initqueue.service. Feb 12 21:59:09.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:09.632994 systemd[1]: Reached target remote-fs-pre.target. Feb 12 21:59:09.638475 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 21:59:09.641962 systemd[1]: Reached target remote-fs.target. Feb 12 21:59:09.648399 systemd[1]: Starting dracut-pre-mount.service... Feb 12 21:59:09.665386 systemd[1]: Finished dracut-pre-mount.service. Feb 12 21:59:09.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.240742 ignition[1021]: Ignition 2.14.0 Feb 12 21:59:10.240757 ignition[1021]: Stage: fetch-offline Feb 12 21:59:10.240897 ignition[1021]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:59:10.240951 ignition[1021]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:59:10.263149 ignition[1021]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:59:10.263803 ignition[1021]: Ignition finished successfully Feb 12 21:59:10.267085 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 21:59:10.278032 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 12 21:59:10.278100 kernel: audit: type=1130 audit(1707775150.267:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.271878 systemd[1]: Starting ignition-fetch.service... 
Feb 12 21:59:10.288848 ignition[1045]: Ignition 2.14.0 Feb 12 21:59:10.288862 ignition[1045]: Stage: fetch Feb 12 21:59:10.289118 ignition[1045]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:59:10.289189 ignition[1045]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:59:10.303932 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:59:10.305577 ignition[1045]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:59:10.313860 ignition[1045]: INFO : PUT result: OK Feb 12 21:59:10.318421 ignition[1045]: DEBUG : parsed url from cmdline: "" Feb 12 21:59:10.324556 ignition[1045]: INFO : no config URL provided Feb 12 21:59:10.324556 ignition[1045]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 12 21:59:10.324556 ignition[1045]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 12 21:59:10.324556 ignition[1045]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:59:10.345529 ignition[1045]: INFO : PUT result: OK Feb 12 21:59:10.345529 ignition[1045]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 12 21:59:10.349379 ignition[1045]: INFO : GET result: OK Feb 12 21:59:10.349379 ignition[1045]: DEBUG : parsing config with SHA512: 5738fd8abcddbec9bab4624988458aae2d0b755e220fe76b12edc699b24e573d8efb511f9361ced907eb63692ec514cd6ed3b036766bd664ae1937170e31e822 Feb 12 21:59:10.389283 unknown[1045]: fetched base config from "system" Feb 12 21:59:10.390728 unknown[1045]: fetched base config from "system" Feb 12 21:59:10.390741 unknown[1045]: fetched user config from "aws" Feb 12 21:59:10.394882 ignition[1045]: fetch: fetch complete Feb 12 21:59:10.394895 ignition[1045]: fetch: fetch passed Feb 12 21:59:10.394964 ignition[1045]: Ignition finished successfully Feb 12 21:59:10.399517 systemd[1]: Finished ignition-fetch.service. Feb 12 21:59:10.401077 systemd[1]: Starting ignition-kargs.service... Feb 12 21:59:10.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.412513 kernel: audit: type=1130 audit(1707775150.398:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.425713 ignition[1051]: Ignition 2.14.0 Feb 12 21:59:10.425727 ignition[1051]: Stage: kargs Feb 12 21:59:10.426057 ignition[1051]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:59:10.426097 ignition[1051]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:59:10.436515 ignition[1051]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:59:10.437969 ignition[1051]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:59:10.440026 ignition[1051]: INFO : PUT result: OK Feb 12 21:59:10.444081 ignition[1051]: kargs: kargs passed Feb 12 21:59:10.444138 ignition[1051]: Ignition finished successfully Feb 12 21:59:10.446171 systemd[1]: Finished ignition-kargs.service. Feb 12 21:59:10.447354 systemd[1]: Starting ignition-disks.service... 
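The Ignition fetch stage above is the IMDSv2 handshake: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET of the instance user data using that token. A minimal standard-library sketch of the same flow, only meaningful when run on the instance itself; the 2019-10-01 path mirrors the log, and the two X-aws-ec2-metadata-token headers are the documented IMDSv2 ones:

    # Sketch of the IMDSv2 token-then-fetch flow seen in the Ignition log.
    import urllib.request

    IMDS = "http://169.254.169.254"

    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=5).read().decode()

    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",              # versioned path as in the log
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(data_req, timeout=5).read().decode(errors="replace"))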
Feb 12 21:59:10.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.459871 kernel: audit: type=1130 audit(1707775150.444:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.464226 ignition[1057]: Ignition 2.14.0 Feb 12 21:59:10.464413 ignition[1057]: Stage: disks Feb 12 21:59:10.465712 ignition[1057]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:59:10.465748 ignition[1057]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:59:10.485010 ignition[1057]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:59:10.487339 ignition[1057]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:59:10.489234 ignition[1057]: INFO : PUT result: OK Feb 12 21:59:10.494236 ignition[1057]: disks: disks passed Feb 12 21:59:10.494338 ignition[1057]: Ignition finished successfully Feb 12 21:59:10.496741 systemd[1]: Finished ignition-disks.service. Feb 12 21:59:10.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.498932 systemd[1]: Reached target initrd-root-device.target. Feb 12 21:59:10.509634 kernel: audit: type=1130 audit(1707775150.497:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.505036 systemd[1]: Reached target local-fs-pre.target. Feb 12 21:59:10.506493 systemd[1]: Reached target local-fs.target. Feb 12 21:59:10.507648 systemd[1]: Reached target sysinit.target. Feb 12 21:59:10.509515 systemd[1]: Reached target basic.target. Feb 12 21:59:10.516583 systemd[1]: Starting systemd-fsck-root.service... Feb 12 21:59:10.543174 systemd-fsck[1065]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 12 21:59:10.550378 systemd[1]: Finished systemd-fsck-root.service. Feb 12 21:59:10.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.554784 systemd[1]: Mounting sysroot.mount... Feb 12 21:59:10.563573 kernel: audit: type=1130 audit(1707775150.551:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.575467 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 21:59:10.575759 systemd[1]: Mounted sysroot.mount. Feb 12 21:59:10.577703 systemd[1]: Reached target initrd-root-fs.target. Feb 12 21:59:10.590035 systemd[1]: Mounting sysroot-usr.mount... Feb 12 21:59:10.591888 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 21:59:10.591958 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Feb 12 21:59:10.591993 systemd[1]: Reached target ignition-diskful.target. Feb 12 21:59:10.603896 systemd[1]: Mounted sysroot-usr.mount. Feb 12 21:59:10.615164 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 21:59:10.619545 systemd[1]: Starting initrd-setup-root.service... Feb 12 21:59:10.631522 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1082) Feb 12 21:59:10.638667 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 12 21:59:10.638727 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 12 21:59:10.638739 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 12 21:59:10.643452 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 12 21:59:10.646561 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 21:59:10.648210 initrd-setup-root[1087]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 21:59:10.670646 initrd-setup-root[1113]: cut: /sysroot/etc/group: No such file or directory Feb 12 21:59:10.677817 initrd-setup-root[1121]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 21:59:10.685103 initrd-setup-root[1129]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 21:59:10.829403 systemd[1]: Finished initrd-setup-root.service. Feb 12 21:59:10.838858 kernel: audit: type=1130 audit(1707775150.829:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.829602 systemd-networkd[1019]: eth0: Gained IPv6LL Feb 12 21:59:10.839324 systemd[1]: Starting ignition-mount.service... Feb 12 21:59:10.841897 systemd[1]: Starting sysroot-boot.service... Feb 12 21:59:10.857140 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 21:59:10.857244 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 21:59:10.878369 ignition[1148]: INFO : Ignition 2.14.0 Feb 12 21:59:10.879691 ignition[1148]: INFO : Stage: mount Feb 12 21:59:10.879691 ignition[1148]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:59:10.879691 ignition[1148]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:59:10.894231 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:59:10.896847 ignition[1148]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:59:10.901564 systemd[1]: Finished sysroot-boot.service. Feb 12 21:59:10.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.909182 ignition[1148]: INFO : PUT result: OK Feb 12 21:59:10.910201 kernel: audit: type=1130 audit(1707775150.900:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:59:10.912997 ignition[1148]: INFO : mount: mount passed Feb 12 21:59:10.914523 ignition[1148]: INFO : Ignition finished successfully Feb 12 21:59:10.916282 systemd[1]: Finished ignition-mount.service. Feb 12 21:59:10.924585 kernel: audit: type=1130 audit(1707775150.916:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:10.918972 systemd[1]: Starting ignition-files.service... Feb 12 21:59:10.932417 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 21:59:10.947460 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1157) Feb 12 21:59:10.952102 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 12 21:59:10.952171 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 12 21:59:10.952189 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 12 21:59:10.959451 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 12 21:59:10.961620 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 21:59:10.975425 ignition[1176]: INFO : Ignition 2.14.0 Feb 12 21:59:10.975425 ignition[1176]: INFO : Stage: files Feb 12 21:59:10.979083 ignition[1176]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:59:10.979083 ignition[1176]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:59:10.994005 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:59:10.995562 ignition[1176]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:59:10.997638 ignition[1176]: INFO : PUT result: OK Feb 12 21:59:11.001932 ignition[1176]: DEBUG : files: compiled without relabeling support, skipping Feb 12 21:59:11.006292 ignition[1176]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 21:59:11.008221 ignition[1176]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 21:59:11.020254 ignition[1176]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 21:59:11.022648 ignition[1176]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 21:59:11.026195 unknown[1176]: wrote ssh authorized keys file for user: core Feb 12 21:59:11.029780 ignition[1176]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 21:59:11.033055 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 21:59:11.035760 ignition[1176]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 12 21:59:11.388938 ignition[1176]: INFO : GET result: OK Feb 12 21:59:11.699670 ignition[1176]: DEBUG : file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 12 21:59:11.702843 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 21:59:11.702843 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 21:59:11.702843 ignition[1176]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 12 21:59:11.977112 ignition[1176]: INFO : GET result: OK Feb 12 21:59:12.129753 ignition[1176]: DEBUG : file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 12 21:59:12.134730 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 21:59:12.134730 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 12 21:59:12.134730 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 21:59:12.148221 ignition[1176]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1208799429" Feb 12 21:59:12.153486 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1181) Feb 12 21:59:12.153520 ignition[1176]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1208799429": device or resource busy Feb 12 21:59:12.153520 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1208799429", trying btrfs: device or resource busy Feb 12 21:59:12.153520 ignition[1176]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1208799429" Feb 12 21:59:12.163671 ignition[1176]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1208799429" Feb 12 21:59:12.163671 ignition[1176]: INFO : op(3): [started] unmounting "/mnt/oem1208799429" Feb 12 21:59:12.167095 ignition[1176]: INFO : op(3): [finished] unmounting "/mnt/oem1208799429" Feb 12 21:59:12.167095 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 12 21:59:12.167095 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 21:59:12.167095 ignition[1176]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 12 21:59:12.181360 systemd[1]: mnt-oem1208799429.mount: Deactivated successfully. 
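The op(1)/op(2) sequence above shows the OEM partition being tried as ext4 first (failing with "device or resource busy") and then mounted successfully as btrfs. A rough Python sketch of that try-one-filesystem-then-fall-back pattern, not Ignition's actual mount code (the device path is the one from the log; the temporary mountpoint naming is made up):

    # Illustrative mount-with-fallback, mirroring the ext4 -> btrfs retry above.
    import subprocess
    import tempfile

    def mount_oem(device="/dev/disk/by-label/OEM", fstypes=("ext4", "btrfs")):
        mountpoint = tempfile.mkdtemp(prefix="oem")
        last_err = None
        for fstype in fstypes:
            # mount(8) exits non-zero when the device cannot be mounted with
            # this type; the log records exactly such a failure for ext4.
            proc = subprocess.run(
                ["mount", "-t", fstype, device, mountpoint],
                capture_output=True, text=True,
            )
            if proc.returncode == 0:
                return mountpoint, fstype
            last_err = proc.stderr.strip()
        raise RuntimeError(f"could not mount {device}: {last_err}")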
Feb 12 21:59:12.282545 ignition[1176]: INFO : GET result: OK Feb 12 21:59:12.750270 ignition[1176]: DEBUG : file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 12 21:59:12.756095 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 21:59:12.756095 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 21:59:12.756095 ignition[1176]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 12 21:59:12.815874 ignition[1176]: INFO : GET result: OK Feb 12 21:59:13.503600 ignition[1176]: DEBUG : file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 12 21:59:13.507032 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 21:59:13.507032 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Feb 12 21:59:13.507032 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 21:59:13.507032 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 21:59:13.520308 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 21:59:13.520308 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 21:59:13.525210 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 21:59:13.525210 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 12 21:59:13.530024 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 21:59:13.540970 ignition[1176]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1666644206" Feb 12 21:59:13.543457 ignition[1176]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1666644206": device or resource busy Feb 12 21:59:13.543457 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1666644206", trying btrfs: device or resource busy Feb 12 21:59:13.543457 ignition[1176]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1666644206" Feb 12 21:59:13.543457 ignition[1176]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1666644206" Feb 12 21:59:13.543457 ignition[1176]: INFO : op(6): [started] unmounting "/mnt/oem1666644206" Feb 12 21:59:13.543457 ignition[1176]: INFO : op(6): [finished] unmounting "/mnt/oem1666644206" Feb 12 21:59:13.543457 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 12 21:59:13.543457 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 12 21:59:13.576564 ignition[1176]: INFO : oem config 
not found in "/usr/share/oem", looking on oem partition Feb 12 21:59:13.554696 systemd[1]: mnt-oem1666644206.mount: Deactivated successfully. Feb 12 21:59:13.602286 ignition[1176]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2605246330" Feb 12 21:59:13.604122 ignition[1176]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2605246330": device or resource busy Feb 12 21:59:13.604122 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2605246330", trying btrfs: device or resource busy Feb 12 21:59:13.604122 ignition[1176]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2605246330" Feb 12 21:59:13.611012 ignition[1176]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2605246330" Feb 12 21:59:13.611012 ignition[1176]: INFO : op(9): [started] unmounting "/mnt/oem2605246330" Feb 12 21:59:13.611012 ignition[1176]: INFO : op(9): [finished] unmounting "/mnt/oem2605246330" Feb 12 21:59:13.611012 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 12 21:59:13.611012 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 21:59:13.611012 ignition[1176]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 12 21:59:13.634977 ignition[1176]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem576663994" Feb 12 21:59:13.637165 ignition[1176]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem576663994": device or resource busy Feb 12 21:59:13.637165 ignition[1176]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem576663994", trying btrfs: device or resource busy Feb 12 21:59:13.637165 ignition[1176]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem576663994" Feb 12 21:59:13.650675 ignition[1176]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem576663994" Feb 12 21:59:13.654348 ignition[1176]: INFO : op(c): [started] unmounting "/mnt/oem576663994" Feb 12 21:59:13.652748 systemd[1]: mnt-oem576663994.mount: Deactivated successfully. 
Feb 12 21:59:13.661625 ignition[1176]: INFO : op(c): [finished] unmounting "/mnt/oem576663994" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(e): [started] processing unit "amazon-ssm-agent.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(e): op(f): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(e): op(f): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(e): [finished] processing unit "amazon-ssm-agent.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(10): [started] processing unit "nvidia.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(10): [finished] processing unit "nvidia.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(14): [started] processing unit "prepare-critools.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(14): [finished] processing unit "prepare-critools.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 21:59:13.663727 ignition[1176]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 21:59:13.722658 kernel: audit: type=1130 audit(1707775153.704:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.701723 systemd[1]: Finished ignition-files.service. 
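Each artifact written by the files stage above (cni-plugins, crictl, kubeadm, kubelet) is downloaded and then checked against a pinned SHA512 ("file matches expected sum of: ..."). A minimal sketch of that fetch-and-verify pattern; illustrative only, not Ignition's implementation:

    # Download an artifact, hash it with SHA-512, and compare to the pinned sum,
    # as the "file matches expected sum of" entries above record.
    import hashlib
    import urllib.request

    def fetch_verified(url, expected_sha512, chunk=1 << 20):
        digest = hashlib.sha512()
        data = bytearray()
        with urllib.request.urlopen(url) as resp:
            while True:
                block = resp.read(chunk)
                if not block:
                    break
                digest.update(block)
                data.extend(block)
        if digest.hexdigest() != expected_sha512:
            raise ValueError(f"checksum mismatch for {url}")
        return bytes(data)

    # Example call using a URL and sum quoted verbatim in the log:
    # fetch_verified(
    #     "https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm",
    #     "f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443"
    #     "ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836",
    # )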
Feb 12 21:59:13.731843 ignition[1176]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Feb 12 21:59:13.731843 ignition[1176]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service" Feb 12 21:59:13.731843 ignition[1176]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 21:59:13.731843 ignition[1176]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 21:59:13.731843 ignition[1176]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 21:59:13.731843 ignition[1176]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 21:59:13.731843 ignition[1176]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service" Feb 12 21:59:13.731843 ignition[1176]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 21:59:13.731843 ignition[1176]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 21:59:13.731843 ignition[1176]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 21:59:13.731843 ignition[1176]: INFO : files: files passed Feb 12 21:59:13.731843 ignition[1176]: INFO : Ignition finished successfully Feb 12 21:59:13.768355 kernel: audit: type=1130 audit(1707775153.755:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.721148 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 21:59:13.723592 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 21:59:13.771555 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 21:59:13.726008 systemd[1]: Starting ignition-quench.service... Feb 12 21:59:13.734662 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 21:59:13.734795 systemd[1]: Finished ignition-quench.service. Feb 12 21:59:13.758921 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 21:59:13.767313 systemd[1]: Reached target ignition-complete.target. Feb 12 21:59:13.771481 systemd[1]: Starting initrd-parse-etc.service... Feb 12 21:59:13.798898 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 21:59:13.799018 systemd[1]: Finished initrd-parse-etc.service. 
Feb 12 21:59:13.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.805286 systemd[1]: Reached target initrd-fs.target. Feb 12 21:59:13.808485 systemd[1]: Reached target initrd.target. Feb 12 21:59:13.810943 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 21:59:13.814247 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 21:59:13.831887 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 21:59:13.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.835137 systemd[1]: Starting initrd-cleanup.service... Feb 12 21:59:13.854079 systemd[1]: Stopped target nss-lookup.target. Feb 12 21:59:13.854518 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 21:59:13.862257 systemd[1]: Stopped target timers.target. Feb 12 21:59:13.865567 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 21:59:13.866885 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 21:59:13.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.869700 systemd[1]: Stopped target initrd.target. Feb 12 21:59:13.873521 systemd[1]: Stopped target basic.target. Feb 12 21:59:13.880051 systemd[1]: Stopped target ignition-complete.target. Feb 12 21:59:13.883752 systemd[1]: Stopped target ignition-diskful.target. Feb 12 21:59:13.886168 systemd[1]: Stopped target initrd-root-device.target. Feb 12 21:59:13.889224 systemd[1]: Stopped target remote-fs.target. Feb 12 21:59:13.891901 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 21:59:13.892255 systemd[1]: Stopped target sysinit.target. Feb 12 21:59:13.898476 systemd[1]: Stopped target local-fs.target. Feb 12 21:59:13.901655 systemd[1]: Stopped target local-fs-pre.target. Feb 12 21:59:13.904152 systemd[1]: Stopped target swap.target. Feb 12 21:59:13.905951 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 21:59:13.907917 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 21:59:13.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.910730 systemd[1]: Stopped target cryptsetup.target. Feb 12 21:59:13.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.911934 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 21:59:13.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.912048 systemd[1]: Stopped dracut-initqueue.service. 
Feb 12 21:59:13.913291 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 21:59:13.913417 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 21:59:13.918677 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 21:59:13.918834 systemd[1]: Stopped ignition-files.service. Feb 12 21:59:13.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.948682 iscsid[1026]: iscsid shutting down. Feb 12 21:59:13.929283 systemd[1]: Stopping ignition-mount.service... Feb 12 21:59:13.954214 ignition[1214]: INFO : Ignition 2.14.0 Feb 12 21:59:13.954214 ignition[1214]: INFO : Stage: umount Feb 12 21:59:13.954214 ignition[1214]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:59:13.954214 ignition[1214]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:59:13.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:13.930598 systemd[1]: Stopping iscsid.service... Feb 12 21:59:13.933982 systemd[1]: Stopping sysroot-boot.service... Feb 12 21:59:13.938132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 21:59:13.938598 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 21:59:13.952980 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 21:59:13.954182 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 21:59:13.957377 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 21:59:13.957717 systemd[1]: Stopped iscsid.service. Feb 12 21:59:13.962924 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 21:59:13.968491 systemd[1]: Finished initrd-cleanup.service. Feb 12 21:59:13.978376 systemd[1]: Stopping iscsiuio.service... Feb 12 21:59:14.010160 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 21:59:14.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:59:14.014286 ignition[1214]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:59:14.014286 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:59:14.014286 ignition[1214]: INFO : PUT result: OK Feb 12 21:59:14.010254 systemd[1]: Stopped iscsiuio.service. Feb 12 21:59:14.022991 ignition[1214]: INFO : umount: umount passed Feb 12 21:59:14.022991 ignition[1214]: INFO : Ignition finished successfully Feb 12 21:59:14.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.022834 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 21:59:14.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.022930 systemd[1]: Stopped sysroot-boot.service. Feb 12 21:59:14.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.026865 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 21:59:14.026957 systemd[1]: Stopped ignition-mount.service. Feb 12 21:59:14.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.034166 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 21:59:14.034276 systemd[1]: Stopped ignition-disks.service. Feb 12 21:59:14.035557 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 21:59:14.035631 systemd[1]: Stopped ignition-kargs.service. Feb 12 21:59:14.037457 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 12 21:59:14.037515 systemd[1]: Stopped ignition-fetch.service. Feb 12 21:59:14.040180 systemd[1]: Stopped target network.target. Feb 12 21:59:14.042965 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 21:59:14.043171 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 21:59:14.044837 systemd[1]: Stopped target paths.target. Feb 12 21:59:14.047531 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 21:59:14.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.051490 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 21:59:14.054922 systemd[1]: Stopped target slices.target. Feb 12 21:59:14.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:59:14.057412 systemd[1]: Stopped target sockets.target. Feb 12 21:59:14.057549 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 21:59:14.057595 systemd[1]: Closed iscsid.socket. Feb 12 21:59:14.063182 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 21:59:14.063229 systemd[1]: Closed iscsiuio.socket. Feb 12 21:59:14.068539 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 21:59:14.068601 systemd[1]: Stopped ignition-setup.service. Feb 12 21:59:14.072075 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 21:59:14.072150 systemd[1]: Stopped initrd-setup-root.service. Feb 12 21:59:14.076527 systemd[1]: Stopping systemd-networkd.service... Feb 12 21:59:14.079182 systemd[1]: Stopping systemd-resolved.service... Feb 12 21:59:14.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.087633 systemd-networkd[1019]: eth0: DHCPv6 lease lost Feb 12 21:59:14.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.113000 audit: BPF prog-id=9 op=UNLOAD Feb 12 21:59:14.098107 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 21:59:14.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.098245 systemd[1]: Stopped systemd-networkd.service. Feb 12 21:59:14.100677 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 21:59:14.100724 systemd[1]: Closed systemd-networkd.socket. Feb 12 21:59:14.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.103080 systemd[1]: Stopping network-cleanup.service... Feb 12 21:59:14.106573 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 21:59:14.106665 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 21:59:14.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.139000 audit: BPF prog-id=6 op=UNLOAD Feb 12 21:59:14.109205 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 21:59:14.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.109515 systemd[1]: Stopped systemd-sysctl.service. 
Feb 12 21:59:14.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.114768 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 21:59:14.114846 systemd[1]: Stopped systemd-modules-load.service. Feb 12 21:59:14.116358 systemd[1]: Stopping systemd-udevd.service... Feb 12 21:59:14.121163 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 21:59:14.126551 systemd[1]: Stopped systemd-resolved.service. Feb 12 21:59:14.138354 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 21:59:14.138575 systemd[1]: Stopped systemd-udevd.service. Feb 12 21:59:14.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.141552 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 21:59:14.141608 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 21:59:14.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.142667 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 21:59:14.142707 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 21:59:14.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.144014 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 21:59:14.144082 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 21:59:14.146290 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 21:59:14.146354 systemd[1]: Stopped dracut-cmdline.service. Feb 12 21:59:14.147460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 21:59:14.147517 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 21:59:14.150814 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 21:59:14.161166 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 21:59:14.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.161272 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 21:59:14.163720 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 21:59:14.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:14.163807 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 21:59:14.167975 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 21:59:14.168385 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 21:59:14.178837 systemd[1]: network-cleanup.service: Deactivated successfully. 
Feb 12 21:59:14.180008 systemd[1]: Stopped network-cleanup.service. Feb 12 21:59:14.183543 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 21:59:14.183632 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 21:59:14.187406 systemd[1]: Reached target initrd-switch-root.target. Feb 12 21:59:14.189725 systemd[1]: Starting initrd-switch-root.service... Feb 12 21:59:14.207413 systemd[1]: Switching root. Feb 12 21:59:14.230533 systemd-journald[184]: Journal stopped Feb 12 21:59:20.004197 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 12 21:59:20.004272 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 21:59:20.004293 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 21:59:20.004312 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 21:59:20.004328 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 21:59:20.004346 kernel: SELinux: policy capability open_perms=1 Feb 12 21:59:20.004366 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 21:59:20.004383 kernel: SELinux: policy capability always_check_network=0 Feb 12 21:59:20.004401 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 21:59:20.004423 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 21:59:20.004675 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 21:59:20.004694 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 21:59:20.004712 systemd[1]: Successfully loaded SELinux policy in 80.346ms. Feb 12 21:59:20.004757 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.122ms. Feb 12 21:59:20.004779 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 21:59:20.004798 systemd[1]: Detected virtualization amazon. Feb 12 21:59:20.004816 systemd[1]: Detected architecture x86-64. Feb 12 21:59:20.004835 systemd[1]: Detected first boot. Feb 12 21:59:20.004859 systemd[1]: Initializing machine ID from VM UUID. Feb 12 21:59:20.004878 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 12 21:59:20.004895 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 12 21:59:20.004918 kernel: audit: type=1400 audit(1707775155.408:87): avc: denied { associate } for pid=1248 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 21:59:20.004937 kernel: audit: type=1300 audit(1707775155.408:87): arch=c000003e syscall=188 success=yes exit=0 a0=c0001178dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=1231 pid=1248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:59:20.004955 kernel: audit: type=1327 audit(1707775155.408:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 21:59:20.004976 kernel: audit: type=1400 audit(1707775155.411:88): avc: denied { associate } for pid=1248 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 21:59:20.004995 kernel: audit: type=1300 audit(1707775155.411:88): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179b5 a2=1ed a3=0 items=2 ppid=1231 pid=1248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:59:20.005012 kernel: audit: type=1307 audit(1707775155.411:88): cwd="/" Feb 12 21:59:20.005029 kernel: audit: type=1302 audit(1707775155.411:88): item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:20.005046 kernel: audit: type=1302 audit(1707775155.411:88): item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:20.005066 kernel: audit: type=1327 audit(1707775155.411:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 21:59:20.005085 systemd[1]: Populated /etc with preset unit settings. Feb 12 21:59:20.005103 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:59:20.005122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 21:59:20.005142 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
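The audit records reproduced above carry the generator's command line as a hex-encoded, NUL-separated proctitle= value (the type=1327 entries). A small illustrative helper for decoding such a value; the hex string itself is quoted from the log and is truncated there, so the last argument decodes truncated as well:

    # Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes.
    def decode_proctitle(hexstr):
        return [arg.decode(errors="replace")
                for arg in bytes.fromhex(hexstr).split(b"\x00")]

    # decode_proctitle("2F7573722F6C6962...") ->
    #   ["/usr/lib/systemd/system-generators/torcx-generator", ...]
    # (pass the full hex value from the record; the "..." here is shorthand)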
Feb 12 21:59:20.005160 kernel: audit: type=1334 audit(1707775159.691:89): prog-id=12 op=LOAD Feb 12 21:59:20.005181 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 21:59:20.005202 systemd[1]: Stopped initrd-switch-root.service. Feb 12 21:59:20.005220 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 21:59:20.005238 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 21:59:20.005258 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 21:59:20.005276 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 21:59:20.005295 systemd[1]: Created slice system-getty.slice. Feb 12 21:59:20.005313 systemd[1]: Created slice system-modprobe.slice. Feb 12 21:59:20.005334 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 21:59:20.005353 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 21:59:20.005371 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 21:59:20.005388 systemd[1]: Created slice user.slice. Feb 12 21:59:20.005409 systemd[1]: Started systemd-ask-password-console.path. Feb 12 21:59:20.005486 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 21:59:20.005506 systemd[1]: Set up automount boot.automount. Feb 12 21:59:20.005525 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 21:59:20.005543 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 21:59:20.005563 systemd[1]: Stopped target initrd-fs.target. Feb 12 21:59:20.005582 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 21:59:20.005600 systemd[1]: Reached target integritysetup.target. Feb 12 21:59:20.005618 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 21:59:20.005637 systemd[1]: Reached target remote-fs.target. Feb 12 21:59:20.005655 systemd[1]: Reached target slices.target. Feb 12 21:59:20.005673 systemd[1]: Reached target swap.target. Feb 12 21:59:20.005692 systemd[1]: Reached target torcx.target. Feb 12 21:59:20.005711 systemd[1]: Reached target veritysetup.target. Feb 12 21:59:20.005728 systemd[1]: Listening on systemd-coredump.socket. Feb 12 21:59:20.005752 systemd[1]: Listening on systemd-initctl.socket. Feb 12 21:59:20.005770 systemd[1]: Listening on systemd-networkd.socket. Feb 12 21:59:20.005788 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 21:59:20.005807 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 21:59:20.005826 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 21:59:20.005844 systemd[1]: Mounting dev-hugepages.mount... Feb 12 21:59:20.005862 systemd[1]: Mounting dev-mqueue.mount... Feb 12 21:59:20.005880 systemd[1]: Mounting media.mount... Feb 12 21:59:20.005899 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 21:59:20.005918 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 21:59:20.005934 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 21:59:20.005972 systemd[1]: Mounting tmp.mount... Feb 12 21:59:20.005992 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 21:59:20.006010 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 21:59:20.006029 systemd[1]: Starting kmod-static-nodes.service... Feb 12 21:59:20.006047 systemd[1]: Starting modprobe@configfs.service... Feb 12 21:59:20.006064 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 21:59:20.006085 systemd[1]: Starting modprobe@drm.service... 
Feb 12 21:59:20.006103 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 21:59:20.006121 systemd[1]: Starting modprobe@fuse.service... Feb 12 21:59:20.006140 systemd[1]: Starting modprobe@loop.service... Feb 12 21:59:20.006164 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 21:59:20.006183 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 21:59:20.006205 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 21:59:20.006223 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 21:59:20.006241 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 21:59:20.006260 systemd[1]: Stopped systemd-journald.service. Feb 12 21:59:20.006278 systemd[1]: Starting systemd-journald.service... Feb 12 21:59:20.006296 systemd[1]: Starting systemd-modules-load.service... Feb 12 21:59:20.006314 kernel: loop: module loaded Feb 12 21:59:20.006333 systemd[1]: Starting systemd-network-generator.service... Feb 12 21:59:20.006352 systemd[1]: Starting systemd-remount-fs.service... Feb 12 21:59:20.006372 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 21:59:20.006391 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 21:59:20.006410 systemd[1]: Stopped verity-setup.service. Feb 12 21:59:20.006496 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 21:59:20.006517 systemd[1]: Mounted dev-hugepages.mount. Feb 12 21:59:20.006535 systemd[1]: Mounted dev-mqueue.mount. Feb 12 21:59:20.006553 systemd[1]: Mounted media.mount. Feb 12 21:59:20.006571 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 21:59:20.006588 kernel: fuse: init (API version 7.34) Feb 12 21:59:20.006609 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 21:59:20.006627 systemd[1]: Mounted tmp.mount. Feb 12 21:59:20.006644 systemd[1]: Finished kmod-static-nodes.service. Feb 12 21:59:20.006662 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 21:59:20.006681 systemd[1]: Finished modprobe@configfs.service. Feb 12 21:59:20.006700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 21:59:20.006719 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 21:59:20.006737 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 21:59:20.006756 systemd[1]: Finished modprobe@drm.service. Feb 12 21:59:20.006777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 21:59:20.006794 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 21:59:20.006813 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 21:59:20.006832 systemd[1]: Finished modprobe@fuse.service. Feb 12 21:59:20.006855 systemd-journald[1320]: Journal started Feb 12 21:59:20.006924 systemd-journald[1320]: Runtime Journal (/run/log/journal/ec2146a4016b96801d3ca06f7838a55d) is 4.8M, max 38.7M, 33.9M free. 
Feb 12 21:59:14.975000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 21:59:15.176000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 21:59:15.176000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 21:59:15.176000 audit: BPF prog-id=10 op=LOAD Feb 12 21:59:15.176000 audit: BPF prog-id=10 op=UNLOAD Feb 12 21:59:15.176000 audit: BPF prog-id=11 op=LOAD Feb 12 21:59:15.176000 audit: BPF prog-id=11 op=UNLOAD Feb 12 21:59:15.408000 audit[1248]: AVC avc: denied { associate } for pid=1248 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 21:59:15.408000 audit[1248]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=1231 pid=1248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:59:15.408000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 21:59:15.411000 audit[1248]: AVC avc: denied { associate } for pid=1248 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 21:59:15.411000 audit[1248]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179b5 a2=1ed a3=0 items=2 ppid=1231 pid=1248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:59:15.411000 audit: CWD cwd="/" Feb 12 21:59:15.411000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:15.411000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:15.411000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 21:59:19.691000 audit: BPF prog-id=12 op=LOAD Feb 12 21:59:19.691000 audit: BPF prog-id=3 op=UNLOAD Feb 12 21:59:19.692000 audit: BPF prog-id=13 op=LOAD Feb 12 21:59:19.692000 audit: BPF prog-id=14 op=LOAD Feb 12 21:59:19.692000 audit: BPF prog-id=4 op=UNLOAD Feb 12 21:59:19.692000 audit: BPF prog-id=5 op=UNLOAD Feb 12 21:59:19.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.702000 audit: BPF prog-id=12 op=UNLOAD Feb 12 21:59:19.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.901000 audit: BPF prog-id=15 op=LOAD Feb 12 21:59:19.901000 audit: BPF prog-id=16 op=LOAD Feb 12 21:59:19.901000 audit: BPF prog-id=17 op=LOAD Feb 12 21:59:19.901000 audit: BPF prog-id=13 op=UNLOAD Feb 12 21:59:19.901000 audit: BPF prog-id=14 op=UNLOAD Feb 12 21:59:19.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 21:59:19.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:19.997000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 21:59:19.997000 audit[1320]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd4a33a00 a2=4000 a3=7fffd4a33a9c items=0 ppid=1 pid=1320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:59:19.997000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 21:59:20.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:15.396384 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 21:59:19.690137 systemd[1]: Queued start job for default target multi-user.target. Feb 12 21:59:15.397233 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 21:59:19.695140 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 21:59:20.009907 systemd[1]: Started systemd-journald.service. 
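
The audit PROCTITLE records interleaved above carry the audited process's command line hex-encoded, with NUL bytes separating the individual arguments. A small sketch of how such a value decodes, using a shortened prefix of the torcx-generator proctitle= value logged earlier; decode_proctitle is a hypothetical helper for illustration, not part of auditd.

    # Audit PROCTITLE values are the raw argv bytes in hex, NUL-separated.
    def decode_proctitle(hexstr: str) -> list:
        raw = bytes.fromhex(hexstr)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

    # Shortened prefix of the proctitle= value from the torcx-generator record above.
    print(decode_proctitle(
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
        "746F7263782D67656E657261746F7200"
        "2F72756E2F73797374656D642F67656E657261746F72"))
    # ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator']
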
Feb 12 21:59:15.397272 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 21:59:15.397321 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 21:59:15.397337 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 21:59:15.397385 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 21:59:15.397406 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 21:59:15.397747 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 21:59:15.397803 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 21:59:15.397823 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 21:59:15.407714 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 21:59:15.407877 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 21:59:15.407899 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 21:59:15.407963 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 21:59:15.407986 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 21:59:20.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:59:15.407999 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 21:59:19.046780 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:19Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 21:59:19.047121 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:19Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 21:59:19.047248 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:19Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 21:59:19.047638 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:19Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 21:59:19.047694 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:19Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 21:59:19.047928 /usr/lib/systemd/system-generators/torcx-generator[1248]: time="2024-02-12T21:59:19Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 21:59:20.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.012616 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 21:59:20.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.014622 systemd[1]: Finished modprobe@loop.service. 
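
The torcx-generator lines above seal the applied profile into /run/metadata/torcx as TORCX_* key/value pairs that later units can consume as environment settings. A minimal sketch parsing those pairs; the keys and values are the ones in the "system state sealed" message, while the parse_pairs helper and the one-pair-per-line handling are assumptions for illustration.

    # Key/value pairs from the "system state sealed" line above.
    pairs = [
        'TORCX_LOWER_PROFILES="vendor"',
        'TORCX_UPPER_PROFILE=""',
        'TORCX_PROFILE_PATH="/run/torcx/profile.json"',
        'TORCX_BINDIR="/run/torcx/bin"',
        'TORCX_UNPACKDIR="/run/torcx/unpack"',
    ]

    def parse_pairs(lines) -> dict:
        env = {}
        for line in lines:
            key, _, value = line.partition("=")
            env[key] = value.strip('"')
        return env

    print(parse_pairs(pairs)["TORCX_BINDIR"])  # /run/torcx/bin
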
Feb 12 21:59:20.016690 systemd[1]: Finished systemd-modules-load.service. Feb 12 21:59:20.019057 systemd[1]: Finished systemd-network-generator.service. Feb 12 21:59:20.020829 systemd[1]: Finished systemd-remount-fs.service. Feb 12 21:59:20.022956 systemd[1]: Reached target network-pre.target. Feb 12 21:59:20.025925 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 21:59:20.028992 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 21:59:20.030124 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 21:59:20.035007 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 21:59:20.038326 systemd[1]: Starting systemd-journal-flush.service... Feb 12 21:59:20.039972 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 21:59:20.041827 systemd[1]: Starting systemd-random-seed.service... Feb 12 21:59:20.042933 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 21:59:20.044152 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:59:20.047989 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 21:59:20.052238 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 21:59:20.069648 systemd-journald[1320]: Time spent on flushing to /var/log/journal/ec2146a4016b96801d3ca06f7838a55d is 94.594ms for 1197 entries. Feb 12 21:59:20.069648 systemd-journald[1320]: System Journal (/var/log/journal/ec2146a4016b96801d3ca06f7838a55d) is 8.0M, max 195.6M, 187.6M free. Feb 12 21:59:20.187671 systemd-journald[1320]: Received client request to flush runtime journal. Feb 12 21:59:20.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.080891 systemd[1]: Finished systemd-random-seed.service. Feb 12 21:59:20.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.201367 udevadm[1359]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 12 21:59:20.082163 systemd[1]: Reached target first-boot-complete.target. Feb 12 21:59:20.108570 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:59:20.147558 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 21:59:20.151247 systemd[1]: Starting systemd-udev-settle.service... Feb 12 21:59:20.175345 systemd[1]: Finished flatcar-tmpfiles.service. 
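
The flush report above says 94.594 ms were spent moving 1197 entries from the runtime journal to /var/log/journal, which works out to roughly 79 microseconds per entry. As a quick check:

    # Arithmetic from the journald flush line above.
    flush_ms, entries = 94.594, 1197
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~79.0 us
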
Feb 12 21:59:20.178422 systemd[1]: Starting systemd-sysusers.service... Feb 12 21:59:20.194470 systemd[1]: Finished systemd-journal-flush.service. Feb 12 21:59:20.307356 systemd[1]: Finished systemd-sysusers.service. Feb 12 21:59:20.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.313521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 21:59:20.374123 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 21:59:20.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.867698 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 21:59:20.878563 kernel: kauditd_printk_skb: 46 callbacks suppressed Feb 12 21:59:20.878686 kernel: audit: type=1130 audit(1707775160.867:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.878728 kernel: audit: type=1334 audit(1707775160.869:135): prog-id=18 op=LOAD Feb 12 21:59:20.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.869000 audit: BPF prog-id=18 op=LOAD Feb 12 21:59:20.877976 systemd[1]: Starting systemd-udevd.service... Feb 12 21:59:20.875000 audit: BPF prog-id=19 op=LOAD Feb 12 21:59:20.875000 audit: BPF prog-id=7 op=UNLOAD Feb 12 21:59:20.875000 audit: BPF prog-id=8 op=UNLOAD Feb 12 21:59:20.879453 kernel: audit: type=1334 audit(1707775160.875:136): prog-id=19 op=LOAD Feb 12 21:59:20.879504 kernel: audit: type=1334 audit(1707775160.875:137): prog-id=7 op=UNLOAD Feb 12 21:59:20.879531 kernel: audit: type=1334 audit(1707775160.875:138): prog-id=8 op=UNLOAD Feb 12 21:59:20.912025 systemd-udevd[1369]: Using default interface naming scheme 'v252'. Feb 12 21:59:20.970917 systemd[1]: Started systemd-udevd.service. Feb 12 21:59:20.982348 kernel: audit: type=1130 audit(1707775160.970:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.982494 kernel: audit: type=1334 audit(1707775160.972:140): prog-id=20 op=LOAD Feb 12 21:59:20.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:20.972000 audit: BPF prog-id=20 op=LOAD Feb 12 21:59:20.974493 systemd[1]: Starting systemd-networkd.service... 
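
The kernel audit lines above carry their own timestamps in the form audit(<epoch seconds>.<millis>:<serial>). Converting the epoch from audit(1707775160.867:134) confirms it lines up with the journald timestamp on the same entry (Feb 12 21:59:20 UTC):

    from datetime import datetime, timezone

    # audit(1707775160.867:134) -> epoch seconds plus an event serial number.
    epoch, serial = "1707775160.867:134".split(":")
    print(datetime.fromtimestamp(float(epoch), tz=timezone.utc), "serial", serial)
    # 2024-02-12 21:59:20.867000+00:00 serial 134
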
Feb 12 21:59:21.001000 audit: BPF prog-id=21 op=LOAD Feb 12 21:59:21.008031 kernel: audit: type=1334 audit(1707775161.001:141): prog-id=21 op=LOAD Feb 12 21:59:21.008329 kernel: audit: type=1334 audit(1707775161.003:142): prog-id=22 op=LOAD Feb 12 21:59:21.008376 kernel: audit: type=1334 audit(1707775161.004:143): prog-id=23 op=LOAD Feb 12 21:59:21.003000 audit: BPF prog-id=22 op=LOAD Feb 12 21:59:21.004000 audit: BPF prog-id=23 op=LOAD Feb 12 21:59:21.007271 systemd[1]: Starting systemd-userdbd.service... Feb 12 21:59:21.076125 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 21:59:21.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:21.100064 systemd[1]: Started systemd-userdbd.service. Feb 12 21:59:21.140373 (udev-worker)[1376]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:59:21.197610 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 21:59:21.208088 kernel: ACPI: button: Power Button [PWRF] Feb 12 21:59:21.208181 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 12 21:59:21.211460 kernel: ACPI: button: Sleep Button [SLPF] Feb 12 21:59:21.234966 systemd-networkd[1375]: lo: Link UP Feb 12 21:59:21.234977 systemd-networkd[1375]: lo: Gained carrier Feb 12 21:59:21.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:21.236337 systemd-networkd[1375]: Enumeration completed Feb 12 21:59:21.236497 systemd[1]: Started systemd-networkd.service. Feb 12 21:59:21.239816 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 21:59:21.242596 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 12 21:59:21.248446 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 21:59:21.248833 systemd-networkd[1375]: eth0: Link UP Feb 12 21:59:21.249115 systemd-networkd[1375]: eth0: Gained carrier Feb 12 21:59:21.261582 systemd-networkd[1375]: eth0: DHCPv4 address 172.31.16.81/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 12 21:59:21.280000 audit[1371]: AVC avc: denied { confidentiality } for pid=1371 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 21:59:21.280000 audit[1371]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557c09647870 a1=32194 a2=7f11bf706bc5 a3=5 items=108 ppid=1369 pid=1371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:59:21.280000 audit: CWD cwd="/" Feb 12 21:59:21.280000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=1 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=2 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=3 name=(null) inode=15454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=4 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=5 name=(null) inode=15455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=6 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=7 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=8 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=9 name=(null) inode=15457 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=10 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=11 name=(null) inode=15458 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=12 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=13 name=(null) inode=15459 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=14 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=15 name=(null) inode=15460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=16 name=(null) inode=15456 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=17 name=(null) inode=15461 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=18 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=19 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=20 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=21 name=(null) inode=15463 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=22 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=23 name=(null) inode=15464 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.314504 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1384) Feb 12 21:59:21.280000 audit: PATH item=24 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=25 name=(null) inode=15465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=26 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=27 name=(null) inode=15466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=28 name=(null) inode=15462 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=29 name=(null) inode=15467 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=30 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=31 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=32 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=33 name=(null) inode=15471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=34 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=35 name=(null) inode=15472 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=36 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=37 name=(null) inode=15473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=38 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=39 name=(null) inode=15474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=40 name=(null) inode=15470 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=41 name=(null) inode=15475 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=42 name=(null) inode=15453 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=43 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=44 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=45 name=(null) inode=15477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=46 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=47 name=(null) inode=15478 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=48 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=49 name=(null) inode=15479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=50 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=51 name=(null) inode=15480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=52 name=(null) inode=15476 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=53 name=(null) inode=15481 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=55 name=(null) inode=15482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=56 name=(null) inode=15482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=57 name=(null) inode=15483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=58 name=(null) inode=15482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=59 name=(null) inode=15484 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=60 
name=(null) inode=15482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=61 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=62 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=63 name=(null) inode=15486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=64 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=65 name=(null) inode=15487 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=66 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=67 name=(null) inode=15488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=68 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=69 name=(null) inode=15489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=70 name=(null) inode=15485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=71 name=(null) inode=15490 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=72 name=(null) inode=15482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=73 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=74 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=75 name=(null) inode=15492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=76 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=77 name=(null) inode=15493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=78 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=79 name=(null) inode=15494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=80 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=81 name=(null) inode=15495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=82 name=(null) inode=15491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=83 name=(null) inode=15496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=84 name=(null) inode=15482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=85 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=86 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=87 name=(null) inode=15498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=88 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=89 name=(null) inode=15499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=90 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=91 name=(null) inode=15500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=92 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=93 name=(null) inode=15501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=94 name=(null) inode=15497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=95 name=(null) inode=15502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=96 name=(null) inode=15482 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=97 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=98 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=99 name=(null) inode=15504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=100 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=101 name=(null) inode=15505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=102 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=103 name=(null) inode=15506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=104 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=105 name=(null) inode=15507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=106 name=(null) inode=15503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PATH item=107 name=(null) inode=15508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:59:21.280000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 21:59:21.358453 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Feb 12 21:59:21.390491 kernel: piix4_smbus 0000:00:01.3: SMBus Host 
Controller at 0xb100, revision 255 Feb 12 21:59:21.439529 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 21:59:21.505421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 21:59:21.614868 systemd[1]: Finished systemd-udev-settle.service. Feb 12 21:59:21.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:21.618513 systemd[1]: Starting lvm2-activation-early.service... Feb 12 21:59:21.679625 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 21:59:21.708123 systemd[1]: Finished lvm2-activation-early.service. Feb 12 21:59:21.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:21.709417 systemd[1]: Reached target cryptsetup.target. Feb 12 21:59:21.712261 systemd[1]: Starting lvm2-activation.service... Feb 12 21:59:21.718881 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 21:59:21.746496 systemd[1]: Finished lvm2-activation.service. Feb 12 21:59:21.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:21.748454 systemd[1]: Reached target local-fs-pre.target. Feb 12 21:59:21.749659 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 21:59:21.749705 systemd[1]: Reached target local-fs.target. Feb 12 21:59:21.750664 systemd[1]: Reached target machines.target. Feb 12 21:59:21.753392 systemd[1]: Starting ldconfig.service... Feb 12 21:59:21.754944 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 21:59:21.755030 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:59:21.756321 systemd[1]: Starting systemd-boot-update.service... Feb 12 21:59:21.759366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 21:59:21.762303 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 21:59:21.763919 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 21:59:21.763994 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 21:59:21.765247 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 21:59:21.771668 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1486 (bootctl) Feb 12 21:59:21.772904 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 21:59:21.792052 systemd-tmpfiles[1489]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 21:59:21.794105 systemd-tmpfiles[1489]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 21:59:21.796270 systemd-tmpfiles[1489]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
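
The DHCPv4 lease reported for eth0 a few entries back (172.31.16.81/20, gateway 172.31.16.1) sits in a /20, so the enclosing network runs from 172.31.16.0 through 172.31.31.255. Python's ipaddress module reproduces the math:

    import ipaddress

    # Lease from the systemd-networkd line earlier: 172.31.16.81/20.
    iface = ipaddress.ip_interface("172.31.16.81/20")
    print(iface.network)                    # 172.31.16.0/20
    print(iface.network.broadcast_address)  # 172.31.31.255
    print(iface.network.num_addresses)      # 4096
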
Feb 12 21:59:21.811779 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 21:59:21.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:21.888246 systemd-fsck[1494]: fsck.fat 4.2 (2021-01-31) Feb 12 21:59:21.888246 systemd-fsck[1494]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters Feb 12 21:59:21.891916 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 21:59:21.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:21.894984 systemd[1]: Mounting boot.mount... Feb 12 21:59:21.920501 systemd[1]: Mounted boot.mount. Feb 12 21:59:21.943782 systemd[1]: Finished systemd-boot-update.service. Feb 12 21:59:21.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:22.088268 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 21:59:22.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:22.091304 systemd[1]: Starting audit-rules.service... Feb 12 21:59:22.094094 systemd[1]: Starting clean-ca-certificates.service... Feb 12 21:59:22.102909 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 21:59:22.115000 audit: BPF prog-id=24 op=LOAD Feb 12 21:59:22.122000 audit: BPF prog-id=25 op=LOAD Feb 12 21:59:22.121683 systemd[1]: Starting systemd-resolved.service... Feb 12 21:59:22.125718 systemd[1]: Starting systemd-timesyncd.service... Feb 12 21:59:22.128388 systemd[1]: Starting systemd-update-utmp.service... Feb 12 21:59:22.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:22.135036 systemd[1]: Finished clean-ca-certificates.service. Feb 12 21:59:22.136609 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 21:59:22.137000 audit[1514]: SYSTEM_BOOT pid=1514 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 21:59:22.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:22.144997 systemd[1]: Finished systemd-update-utmp.service. Feb 12 21:59:22.231929 systemd[1]: Finished systemd-journal-catalog-update.service. 
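
The systemd-fsck@dev-disk-by\x2dlabel-OEM and ...EFI\x2dSYSTEM instance names above show systemd's unit-name escaping: "/" separators in a device path become "-", and bytes that would be ambiguous (such as a literal "-") are written as \xNN. A simplified re-implementation for illustration only; escape_path is this note's own helper, not the systemd-escape tool, and it skips some of the real rules (for example leading dots).

    def escape_path(path: str) -> str:
        # Simplified sketch of systemd's path escaping: "/" -> "-",
        # other bytes outside [A-Za-z0-9_.] -> \xNN.
        def esc(component: str) -> str:
            return "".join(c if c.isalnum() or c in "_." else f"\\x{ord(c):02x}"
                           for c in component)
        return "-".join(esc(p) for p in path.strip("/").split("/"))

    print(escape_path("/dev/disk/by-label/OEM") + ".device")
    # dev-disk-by\x2dlabel-OEM.device
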
Feb 12 21:59:22.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:59:22.289741 augenrules[1528]: No rules Feb 12 21:59:22.288000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 21:59:22.288000 audit[1528]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd5b5a590 a2=420 a3=0 items=0 ppid=1508 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:59:22.288000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 21:59:22.292895 systemd[1]: Finished audit-rules.service. Feb 12 21:59:22.309078 systemd-resolved[1512]: Positive Trust Anchors: Feb 12 21:59:22.309098 systemd-resolved[1512]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 21:59:22.309141 systemd-resolved[1512]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 21:59:22.320072 systemd[1]: Started systemd-timesyncd.service. Feb 12 21:59:22.321876 systemd[1]: Reached target time-set.target. Feb 12 21:59:22.336658 systemd-resolved[1512]: Defaulting to hostname 'linux'. Feb 12 21:59:22.338646 systemd[1]: Started systemd-resolved.service. Feb 12 21:59:22.340311 systemd[1]: Reached target network.target. Feb 12 21:59:22.342418 systemd[1]: Reached target nss-lookup.target. Feb 12 21:59:22.413719 systemd-networkd[1375]: eth0: Gained IPv6LL Feb 12 21:59:22.416774 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 21:59:22.418153 systemd[1]: Reached target network-online.target. Feb 12 21:59:22.459700 systemd-timesyncd[1513]: Contacted time server 73.193.62.54:123 (0.flatcar.pool.ntp.org). Feb 12 21:59:22.459880 systemd-timesyncd[1513]: Initial clock synchronization to Mon 2024-02-12 21:59:22.647569 UTC. Feb 12 21:59:22.686374 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 21:59:22.687207 ldconfig[1485]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 21:59:22.688627 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 21:59:22.699935 systemd[1]: Finished ldconfig.service. Feb 12 21:59:22.703308 systemd[1]: Starting systemd-update-done.service... Feb 12 21:59:22.720034 systemd[1]: Finished systemd-update-done.service. Feb 12 21:59:22.721468 systemd[1]: Reached target sysinit.target. Feb 12 21:59:22.725748 systemd[1]: Started motdgen.path. Feb 12 21:59:22.726722 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 21:59:22.728862 systemd[1]: Started logrotate.timer. Feb 12 21:59:22.730020 systemd[1]: Started mdadm.timer. Feb 12 21:59:22.731241 systemd[1]: Started systemd-tmpfiles-clean.timer. 
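
The timesyncd entry above was logged at 21:59:22.459880 and reports stepping the clock to 21:59:22.647569 UTC after contacting 0.flatcar.pool.ntp.org, so the instance clock was roughly 0.19 s behind at first synchronization:

    from datetime import datetime

    # Timestamps taken from the systemd-timesyncd lines above.
    before = datetime.fromisoformat("2024-02-12 21:59:22.459880")
    after = datetime.fromisoformat("2024-02-12 21:59:22.647569")
    print(f"clock stepped forward by {(after - before).total_seconds():.3f} s")  # 0.188
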
Feb 12 21:59:22.732822 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 21:59:22.732851 systemd[1]: Reached target paths.target. Feb 12 21:59:22.733962 systemd[1]: Reached target timers.target. Feb 12 21:59:22.735450 systemd[1]: Listening on dbus.socket. Feb 12 21:59:22.739180 systemd[1]: Starting docker.socket... Feb 12 21:59:22.745596 systemd[1]: Listening on sshd.socket. Feb 12 21:59:22.747698 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:59:22.753116 systemd[1]: Listening on docker.socket. Feb 12 21:59:22.754830 systemd[1]: Reached target sockets.target. Feb 12 21:59:22.755853 systemd[1]: Reached target basic.target. Feb 12 21:59:22.757169 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 21:59:22.757200 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 21:59:22.759604 systemd[1]: Started amazon-ssm-agent.service. Feb 12 21:59:22.763766 systemd[1]: Starting containerd.service... Feb 12 21:59:22.775992 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 21:59:22.787472 systemd[1]: Starting dbus.service... Feb 12 21:59:22.793835 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 21:59:22.796337 systemd[1]: Starting extend-filesystems.service... Feb 12 21:59:22.797371 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 21:59:22.799326 systemd[1]: Starting motdgen.service... Feb 12 21:59:22.802758 systemd[1]: Started nvidia.service. Feb 12 21:59:22.821392 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 21:59:22.824911 systemd[1]: Starting prepare-critools.service... Feb 12 21:59:22.829964 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 21:59:22.854312 systemd[1]: Starting sshd-keygen.service... Feb 12 21:59:22.861555 systemd[1]: Starting systemd-logind.service... Feb 12 21:59:22.863190 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 21:59:22.863274 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 21:59:22.864236 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 21:59:22.866313 systemd[1]: Starting update-engine.service... Feb 12 21:59:22.880580 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 21:59:22.922953 jq[1553]: true Feb 12 21:59:22.973929 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 21:59:22.990804 jq[1543]: false Feb 12 21:59:22.974554 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 21:59:22.980578 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 21:59:22.980785 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Feb 12 21:59:23.031483 tar[1555]: ./ Feb 12 21:59:23.031483 tar[1555]: ./loopback Feb 12 21:59:23.044768 tar[1556]: crictl Feb 12 21:59:23.081767 jq[1563]: true Feb 12 21:59:23.146586 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 21:59:23.146931 systemd[1]: Finished motdgen.service. Feb 12 21:59:23.147831 dbus-daemon[1542]: [system] SELinux support is enabled Feb 12 21:59:23.148689 extend-filesystems[1544]: Found nvme0n1 Feb 12 21:59:23.150083 extend-filesystems[1544]: Found nvme0n1p1 Feb 12 21:59:23.150581 systemd[1]: Started dbus.service. Feb 12 21:59:23.166547 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 21:59:23.166594 systemd[1]: Reached target system-config.target. Feb 12 21:59:23.171886 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 21:59:23.171911 systemd[1]: Reached target user-config.target. Feb 12 21:59:23.192984 dbus-daemon[1542]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1375 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 12 21:59:23.194074 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 21:59:23.200535 systemd[1]: Starting systemd-hostnamed.service... Feb 12 21:59:23.204265 extend-filesystems[1544]: Found nvme0n1p2 Feb 12 21:59:23.205412 extend-filesystems[1544]: Found nvme0n1p3 Feb 12 21:59:23.205412 extend-filesystems[1544]: Found usr Feb 12 21:59:23.205412 extend-filesystems[1544]: Found nvme0n1p4 Feb 12 21:59:23.205412 extend-filesystems[1544]: Found nvme0n1p6 Feb 12 21:59:23.205412 extend-filesystems[1544]: Found nvme0n1p7 Feb 12 21:59:23.205412 extend-filesystems[1544]: Found nvme0n1p9 Feb 12 21:59:23.205412 extend-filesystems[1544]: Checking size of /dev/nvme0n1p9 Feb 12 21:59:23.264814 extend-filesystems[1544]: Resized partition /dev/nvme0n1p9 Feb 12 21:59:23.291570 extend-filesystems[1603]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 21:59:23.300458 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 12 21:59:23.368046 update_engine[1552]: I0212 21:59:23.367296 1552 main.cc:92] Flatcar Update Engine starting Feb 12 21:59:23.379716 amazon-ssm-agent[1539]: 2024/02/12 21:59:23 Failed to load instance info from vault. RegistrationKey does not exist. Feb 12 21:59:23.382781 systemd[1]: Started update-engine.service. Feb 12 21:59:23.384061 update_engine[1552]: I0212 21:59:23.383059 1552 update_check_scheduler.cc:74] Next update check in 8m42s Feb 12 21:59:23.386415 systemd[1]: Started locksmithd.service. Feb 12 21:59:23.393645 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 12 21:59:23.393904 amazon-ssm-agent[1539]: Initializing new seelog logger Feb 12 21:59:23.393904 amazon-ssm-agent[1539]: New Seelog Logger Creation Complete Feb 12 21:59:23.393904 amazon-ssm-agent[1539]: 2024/02/12 21:59:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 21:59:23.393904 amazon-ssm-agent[1539]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 12 21:59:23.393904 amazon-ssm-agent[1539]: 2024/02/12 21:59:23 processing appconfig overrides Feb 12 21:59:23.424916 extend-filesystems[1603]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 12 21:59:23.424916 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 21:59:23.424916 extend-filesystems[1603]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 12 21:59:23.445357 extend-filesystems[1544]: Resized filesystem in /dev/nvme0n1p9 Feb 12 21:59:23.460403 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Feb 12 21:59:23.427286 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 21:59:23.460859 env[1564]: time="2024-02-12T21:59:23.438732197Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 21:59:23.427567 systemd[1]: Finished extend-filesystems.service. Feb 12 21:59:23.448838 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 21:59:23.500347 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 21:59:23.522052 tar[1555]: ./bandwidth Feb 12 21:59:23.540859 systemd-logind[1551]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 21:59:23.541525 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 12 21:59:23.541665 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 21:59:23.545569 systemd-logind[1551]: New seat seat0. Feb 12 21:59:23.550325 systemd[1]: Started systemd-logind.service. Feb 12 21:59:23.641192 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 12 21:59:23.642299 dbus-daemon[1542]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1588 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 12 21:59:23.641373 systemd[1]: Started systemd-hostnamed.service. Feb 12 21:59:23.647619 systemd[1]: Starting polkit.service... Feb 12 21:59:23.677862 polkitd[1636]: Started polkitd version 121 Feb 12 21:59:23.678809 env[1564]: time="2024-02-12T21:59:23.678760898Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 21:59:23.679506 env[1564]: time="2024-02-12T21:59:23.679473647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:59:23.685368 env[1564]: time="2024-02-12T21:59:23.685314912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:59:23.685705 env[1564]: time="2024-02-12T21:59:23.685677874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:59:23.687040 env[1564]: time="2024-02-12T21:59:23.686056153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:59:23.688578 env[1564]: time="2024-02-12T21:59:23.688518507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 12 21:59:23.689103 env[1564]: time="2024-02-12T21:59:23.689077594Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 21:59:23.689194 env[1564]: time="2024-02-12T21:59:23.689178573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 21:59:23.689533 env[1564]: time="2024-02-12T21:59:23.689509079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:59:23.692260 env[1564]: time="2024-02-12T21:59:23.692192324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:59:23.692737 env[1564]: time="2024-02-12T21:59:23.692705546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:59:23.701867 env[1564]: time="2024-02-12T21:59:23.701819804Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 21:59:23.702152 env[1564]: time="2024-02-12T21:59:23.702126739Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 21:59:23.702262 env[1564]: time="2024-02-12T21:59:23.702244995Z" level=info msg="metadata content store policy set" policy=shared Feb 12 21:59:23.704963 polkitd[1636]: Loading rules from directory /etc/polkit-1/rules.d Feb 12 21:59:23.705408 polkitd[1636]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709145481Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709207859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709228718Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709290010Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709311866Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709397865Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709420843Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709454050Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709489108Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709509981Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709529434Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709550256Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709707738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 21:59:23.711485 env[1564]: time="2024-02-12T21:59:23.709815266Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710290130Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710330133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710352668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710420033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710518045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710546596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710565406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710584739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710605027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710621857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710640028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710662924Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710814714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710835332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.712962 env[1564]: time="2024-02-12T21:59:23.710857392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.713774 env[1564]: time="2024-02-12T21:59:23.710875742Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 12 21:59:23.713774 env[1564]: time="2024-02-12T21:59:23.710897375Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 21:59:23.713774 env[1564]: time="2024-02-12T21:59:23.710914793Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 21:59:23.713774 env[1564]: time="2024-02-12T21:59:23.710942749Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 21:59:23.713774 env[1564]: time="2024-02-12T21:59:23.710987767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 21:59:23.714037 env[1564]: time="2024-02-12T21:59:23.711278924Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 21:59:23.714037 env[1564]: time="2024-02-12T21:59:23.711362284Z" level=info msg="Connect containerd service" Feb 12 21:59:23.714037 env[1564]: time="2024-02-12T21:59:23.711409765Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 21:59:23.717581 polkitd[1636]: Finished loading, compiling and executing 2 rules Feb 12 21:59:23.720684 env[1564]: time="2024-02-12T21:59:23.718101449Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: 
cni plugin not initialized: failed to load cni config" Feb 12 21:59:23.720684 env[1564]: time="2024-02-12T21:59:23.718485312Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 21:59:23.720684 env[1564]: time="2024-02-12T21:59:23.718538403Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 21:59:23.718707 systemd[1]: Started containerd.service. Feb 12 21:59:23.722837 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 12 21:59:23.723032 systemd[1]: Started polkit.service. Feb 12 21:59:23.726173 env[1564]: time="2024-02-12T21:59:23.725887912Z" level=info msg="containerd successfully booted in 0.390330s" Feb 12 21:59:23.727750 polkitd[1636]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 12 21:59:23.732165 env[1564]: time="2024-02-12T21:59:23.732103236Z" level=info msg="Start subscribing containerd event" Feb 12 21:59:23.757763 tar[1555]: ./ptp Feb 12 21:59:23.757941 env[1564]: time="2024-02-12T21:59:23.757899911Z" level=info msg="Start recovering state" Feb 12 21:59:23.758073 env[1564]: time="2024-02-12T21:59:23.758035918Z" level=info msg="Start event monitor" Feb 12 21:59:23.758129 env[1564]: time="2024-02-12T21:59:23.758069628Z" level=info msg="Start snapshots syncer" Feb 12 21:59:23.758129 env[1564]: time="2024-02-12T21:59:23.758087759Z" level=info msg="Start cni network conf syncer for default" Feb 12 21:59:23.758129 env[1564]: time="2024-02-12T21:59:23.758106219Z" level=info msg="Start streaming server" Feb 12 21:59:23.768820 systemd-resolved[1512]: System hostname changed to 'ip-172-31-16-81'. Feb 12 21:59:23.768821 systemd-hostnamed[1588]: Hostname set to (transient) Feb 12 21:59:23.872099 tar[1555]: ./vlan Feb 12 21:59:23.980158 coreos-metadata[1541]: Feb 12 21:59:23.979 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 21:59:23.982659 coreos-metadata[1541]: Feb 12 21:59:23.982 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 12 21:59:23.983298 coreos-metadata[1541]: Feb 12 21:59:23.983 INFO Fetch successful Feb 12 21:59:23.983298 coreos-metadata[1541]: Feb 12 21:59:23.983 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 12 21:59:23.984153 coreos-metadata[1541]: Feb 12 21:59:23.983 INFO Fetch successful Feb 12 21:59:23.993640 unknown[1541]: wrote ssh authorized keys file for user: core Feb 12 21:59:24.031954 update-ssh-keys[1705]: Updated "/home/core/.ssh/authorized_keys" Feb 12 21:59:24.032575 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
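The coreos-metadata-sshkeys@core.service entries above record a simple flow: request a token from http://169.254.169.254/latest/api/token, fetch the public-keys paths under /2019-10-01/meta-data/, and write the result into /home/core/.ssh/authorized_keys. The sketch below is a minimal illustration of that flow, assuming it runs on an EC2 instance with the instance-metadata service reachable; it uses only the Python standard library and is not the coreos-metadata binary itself, and the single-key handling and error handling are assumptions.

```python
# Minimal sketch (not the coreos-metadata implementation) of the flow logged above:
# 1. PUT /latest/api/token to obtain an IMDSv2 session token,
# 2. GET the public-keys metadata paths seen in the journal,
# 3. append the fetched key to the core user's authorized_keys file.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 300) -> str:
    # IMDSv2: a PUT with a TTL header returns a short-lived session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # Subsequent GETs present the token in the X-aws-ec2-metadata-token header.
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    # The journal shows only key index 0 being fetched; a robust client would
    # iterate over every index returned by the public-keys listing.
    imds_get("/2019-10-01/meta-data/public-keys", token)
    key = imds_get("/2019-10-01/meta-data/public-keys/0/openssh-key", token)
    with open("/home/core/.ssh/authorized_keys", "a") as fh:
        fh.write(key.rstrip() + "\n")
```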
Feb 12 21:59:24.059754 tar[1555]: ./host-device Feb 12 21:59:24.185722 tar[1555]: ./tuning Feb 12 21:59:24.196910 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Create new startup processor Feb 12 21:59:24.197220 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [LongRunningPluginsManager] registered plugins: {} Feb 12 21:59:24.197312 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Initializing bookkeeping folders Feb 12 21:59:24.197373 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO removing the completed state files Feb 12 21:59:24.197439 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Initializing bookkeeping folders for long running plugins Feb 12 21:59:24.197523 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 12 21:59:24.197584 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Initializing healthcheck folders for long running plugins Feb 12 21:59:24.197652 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Initializing locations for inventory plugin Feb 12 21:59:24.197710 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Initializing default location for custom inventory Feb 12 21:59:24.197874 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Initializing default location for file inventory Feb 12 21:59:24.197945 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Initializing default location for role inventory Feb 12 21:59:24.198016 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Init the cloudwatchlogs publisher Feb 12 21:59:24.198075 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 12 21:59:24.198135 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:refreshAssociation Feb 12 21:59:24.198200 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:runDocument Feb 12 21:59:24.198261 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:softwareInventory Feb 12 21:59:24.198321 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 12 21:59:24.198385 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:configurePackage Feb 12 21:59:24.198472 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:downloadContent Feb 12 21:59:24.198537 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:configureDocker Feb 12 21:59:24.198692 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform independent plugin aws:runDockerAction Feb 12 21:59:24.198692 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Successfully loaded platform dependent plugin aws:runShellScript Feb 12 21:59:24.198692 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 12 21:59:24.198692 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO OS: linux, Arch: amd64 Feb 12 21:59:24.199710 amazon-ssm-agent[1539]: datastore file 
/var/lib/amazon/ssm/i-007ee0477818448f3/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 12 21:59:24.202651 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] Starting document processing engine... Feb 12 21:59:24.298397 tar[1555]: ./vrf Feb 12 21:59:24.298560 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 12 21:59:24.349046 tar[1555]: ./sbr Feb 12 21:59:24.394877 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 12 21:59:24.403980 tar[1555]: ./tap Feb 12 21:59:24.463596 tar[1555]: ./dhcp Feb 12 21:59:24.489640 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] Starting message polling Feb 12 21:59:24.584326 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 12 21:59:24.638051 tar[1555]: ./static Feb 12 21:59:24.679303 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [instanceID=i-007ee0477818448f3] Starting association polling Feb 12 21:59:24.685480 tar[1555]: ./firewall Feb 12 21:59:24.770129 tar[1555]: ./macvlan Feb 12 21:59:24.774724 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 12 21:59:24.870016 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 12 21:59:24.895199 tar[1555]: ./dummy Feb 12 21:59:24.966385 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 12 21:59:25.023322 tar[1555]: ./bridge Feb 12 21:59:25.062028 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 12 21:59:25.087660 systemd[1]: Finished prepare-critools.service. Feb 12 21:59:25.114184 tar[1555]: ./ipvlan Feb 12 21:59:25.157940 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 12 21:59:25.170229 tar[1555]: ./portmap Feb 12 21:59:25.223416 tar[1555]: ./host-local Feb 12 21:59:25.254013 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [OfflineService] Starting document processing engine... Feb 12 21:59:25.288630 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 21:59:25.350866 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [OfflineService] [EngineProcessor] Starting Feb 12 21:59:25.360745 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 21:59:25.448379 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [OfflineService] [EngineProcessor] Initial processing Feb 12 21:59:25.545516 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [OfflineService] Starting message polling Feb 12 21:59:25.642357 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [OfflineService] Starting send replies to MDS Feb 12 21:59:25.739958 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 12 21:59:25.837551 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 12 21:59:25.935085 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [HealthCheck] HealthCheck reporting agent health. 
Feb 12 21:59:26.032698 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] Starting session document processing engine... Feb 12 21:59:26.130711 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 12 21:59:26.228944 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 12 21:59:26.328002 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-007ee0477818448f3, requestId: e99f5de7-6ca4-4113-aef3-fefb5b2d97c5 Feb 12 21:59:26.426828 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 12 21:59:26.525513 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] listening reply. Feb 12 21:59:26.529418 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 21:59:26.554465 systemd[1]: Finished sshd-keygen.service. Feb 12 21:59:26.557950 systemd[1]: Starting issuegen.service... Feb 12 21:59:26.564771 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 21:59:26.565009 systemd[1]: Finished issuegen.service. Feb 12 21:59:26.568290 systemd[1]: Starting systemd-user-sessions.service... Feb 12 21:59:26.576478 systemd[1]: Finished systemd-user-sessions.service. Feb 12 21:59:26.579623 systemd[1]: Started getty@tty1.service. Feb 12 21:59:26.583083 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 21:59:26.584530 systemd[1]: Reached target getty.target. Feb 12 21:59:26.585933 systemd[1]: Reached target multi-user.target. Feb 12 21:59:26.589352 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 21:59:26.603838 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 21:59:26.604296 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 21:59:26.605789 systemd[1]: Startup finished in 830ms (kernel) + 8.905s (initrd) + 11.809s (userspace) = 21.545s. Feb 12 21:59:26.624378 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [StartupProcessor] Executing startup processor tasks Feb 12 21:59:26.723517 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 12 21:59:26.822560 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 12 21:59:26.924655 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 12 21:59:27.024301 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-007ee0477818448f3?role=subscribe&stream=input Feb 12 21:59:27.124370 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-007ee0477818448f3?role=subscribe&stream=input Feb 12 21:59:27.224643 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] Starting receiving message from control channel Feb 12 21:59:27.325645 amazon-ssm-agent[1539]: 2024-02-12 21:59:24 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 12 21:59:32.775049 systemd[1]: Created slice system-sshd.slice. 
Feb 12 21:59:32.777382 systemd[1]: Started sshd@0-172.31.16.81:22-139.178.89.65:42612.service. Feb 12 21:59:32.961662 sshd[1751]: Accepted publickey for core from 139.178.89.65 port 42612 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:59:32.964237 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:59:32.984986 systemd[1]: Created slice user-500.slice. Feb 12 21:59:32.987208 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 21:59:32.996563 systemd-logind[1551]: New session 1 of user core. Feb 12 21:59:33.011328 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 21:59:33.017199 systemd[1]: Starting user@500.service... Feb 12 21:59:33.031195 (systemd)[1754]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:59:33.166395 systemd[1754]: Queued start job for default target default.target. Feb 12 21:59:33.167121 systemd[1754]: Reached target paths.target. Feb 12 21:59:33.167157 systemd[1754]: Reached target sockets.target. Feb 12 21:59:33.167176 systemd[1754]: Reached target timers.target. Feb 12 21:59:33.167193 systemd[1754]: Reached target basic.target. Feb 12 21:59:33.167315 systemd[1]: Started user@500.service. Feb 12 21:59:33.168822 systemd[1]: Started session-1.scope. Feb 12 21:59:33.169865 systemd[1754]: Reached target default.target. Feb 12 21:59:33.170073 systemd[1754]: Startup finished in 127ms. Feb 12 21:59:33.316650 systemd[1]: Started sshd@1-172.31.16.81:22-139.178.89.65:42616.service. Feb 12 21:59:33.485551 sshd[1763]: Accepted publickey for core from 139.178.89.65 port 42616 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:59:33.487730 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:59:33.495492 systemd-logind[1551]: New session 2 of user core. Feb 12 21:59:33.496370 systemd[1]: Started session-2.scope. Feb 12 21:59:33.628125 sshd[1763]: pam_unix(sshd:session): session closed for user core Feb 12 21:59:33.631197 systemd[1]: sshd@1-172.31.16.81:22-139.178.89.65:42616.service: Deactivated successfully. Feb 12 21:59:33.632137 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 21:59:33.632915 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit. Feb 12 21:59:33.633878 systemd-logind[1551]: Removed session 2. Feb 12 21:59:33.660233 systemd[1]: Started sshd@2-172.31.16.81:22-139.178.89.65:42624.service. Feb 12 21:59:33.828178 sshd[1769]: Accepted publickey for core from 139.178.89.65 port 42624 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:59:33.830532 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:59:33.836293 systemd-logind[1551]: New session 3 of user core. Feb 12 21:59:33.837053 systemd[1]: Started session-3.scope. Feb 12 21:59:33.960391 sshd[1769]: pam_unix(sshd:session): session closed for user core Feb 12 21:59:33.964810 systemd[1]: sshd@2-172.31.16.81:22-139.178.89.65:42624.service: Deactivated successfully. Feb 12 21:59:33.965935 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 21:59:33.966764 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit. Feb 12 21:59:33.968012 systemd-logind[1551]: Removed session 3. Feb 12 21:59:33.985876 systemd[1]: Started sshd@3-172.31.16.81:22-139.178.89.65:42630.service. 
Feb 12 21:59:34.153899 sshd[1775]: Accepted publickey for core from 139.178.89.65 port 42630 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:59:34.155753 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:59:34.162678 systemd[1]: Started session-4.scope. Feb 12 21:59:34.163658 systemd-logind[1551]: New session 4 of user core. Feb 12 21:59:34.294980 sshd[1775]: pam_unix(sshd:session): session closed for user core Feb 12 21:59:34.299571 systemd[1]: sshd@3-172.31.16.81:22-139.178.89.65:42630.service: Deactivated successfully. Feb 12 21:59:34.300549 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 21:59:34.301342 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit. Feb 12 21:59:34.302225 systemd-logind[1551]: Removed session 4. Feb 12 21:59:34.323383 systemd[1]: Started sshd@4-172.31.16.81:22-139.178.89.65:42634.service. Feb 12 21:59:34.494030 sshd[1781]: Accepted publickey for core from 139.178.89.65 port 42634 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:59:34.495693 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:59:34.506334 systemd-logind[1551]: New session 5 of user core. Feb 12 21:59:34.507038 systemd[1]: Started session-5.scope. Feb 12 21:59:34.641264 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 21:59:34.642087 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 21:59:35.300855 systemd[1]: Reloading. Feb 12 21:59:35.423185 /usr/lib/systemd/system-generators/torcx-generator[1814]: time="2024-02-12T21:59:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 21:59:35.425879 /usr/lib/systemd/system-generators/torcx-generator[1814]: time="2024-02-12T21:59:35Z" level=info msg="torcx already run" Feb 12 21:59:35.547976 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:59:35.548000 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 21:59:35.571842 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 21:59:35.681965 systemd[1]: Started kubelet.service. Feb 12 21:59:35.699851 systemd[1]: Starting coreos-metadata.service... Feb 12 21:59:35.790656 kubelet[1865]: E0212 21:59:35.790589 1865 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 21:59:35.792893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 21:59:35.793083 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
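The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") simply reflects that the node has not been bootstrapped yet; the file the error names is normally written during cluster bootstrap, not by hand. Purely as an illustration of what that missing file is, the hypothetical sketch below writes a minimal KubeletConfiguration to that path; the field values are assumptions (cgroupDriver: systemd only mirrors the SystemdCgroup=true runc option in the containerd CRI config dump earlier in this log), not the configuration the later, successful kubelet run actually used.

```python
# Hypothetical illustration only: write a minimal KubeletConfiguration to the
# path named in the kubelet error above. On a real node this file is created
# during bootstrap (e.g. by kubeadm), so this is a sketch, not a procedure.
import os

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Assumption for illustration: matches the SystemdCgroup=true runc option
# shown in the containerd CRI config dump earlier in this log.
cgroupDriver: systemd
"""

def write_kubelet_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as fh:
        fh.write(KUBELET_CONFIG)

if __name__ == "__main__":
    write_kubelet_config()
```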
Feb 12 21:59:35.840674 coreos-metadata[1873]: Feb 12 21:59:35.840 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 21:59:35.842079 coreos-metadata[1873]: Feb 12 21:59:35.842 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Feb 12 21:59:35.842842 coreos-metadata[1873]: Feb 12 21:59:35.842 INFO Fetch successful Feb 12 21:59:35.842942 coreos-metadata[1873]: Feb 12 21:59:35.842 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Feb 12 21:59:35.843336 coreos-metadata[1873]: Feb 12 21:59:35.843 INFO Fetch successful Feb 12 21:59:35.843450 coreos-metadata[1873]: Feb 12 21:59:35.843 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Feb 12 21:59:35.844332 coreos-metadata[1873]: Feb 12 21:59:35.844 INFO Fetch successful Feb 12 21:59:35.844406 coreos-metadata[1873]: Feb 12 21:59:35.844 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Feb 12 21:59:35.844833 coreos-metadata[1873]: Feb 12 21:59:35.844 INFO Fetch successful Feb 12 21:59:35.844899 coreos-metadata[1873]: Feb 12 21:59:35.844 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Feb 12 21:59:35.845340 coreos-metadata[1873]: Feb 12 21:59:35.845 INFO Fetch successful Feb 12 21:59:35.845401 coreos-metadata[1873]: Feb 12 21:59:35.845 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Feb 12 21:59:35.845984 coreos-metadata[1873]: Feb 12 21:59:35.845 INFO Fetch successful Feb 12 21:59:35.846571 coreos-metadata[1873]: Feb 12 21:59:35.846 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Feb 12 21:59:35.847155 coreos-metadata[1873]: Feb 12 21:59:35.847 INFO Fetch successful Feb 12 21:59:35.847279 coreos-metadata[1873]: Feb 12 21:59:35.847 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Feb 12 21:59:35.847775 coreos-metadata[1873]: Feb 12 21:59:35.847 INFO Fetch successful Feb 12 21:59:35.857692 systemd[1]: Finished coreos-metadata.service. Feb 12 21:59:36.244665 systemd[1]: Stopped kubelet.service. Feb 12 21:59:36.267232 systemd[1]: Reloading. Feb 12 21:59:36.374929 /usr/lib/systemd/system-generators/torcx-generator[1928]: time="2024-02-12T21:59:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 21:59:36.374969 /usr/lib/systemd/system-generators/torcx-generator[1928]: time="2024-02-12T21:59:36Z" level=info msg="torcx already run" Feb 12 21:59:36.514625 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:59:36.514650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 21:59:36.540406 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 21:59:36.692249 systemd[1]: Started kubelet.service. 
Feb 12 21:59:36.762008 kubelet[1981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 21:59:36.762008 kubelet[1981]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 21:59:36.762008 kubelet[1981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 21:59:36.762581 kubelet[1981]: I0212 21:59:36.762057 1981 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 21:59:37.059208 kubelet[1981]: I0212 21:59:37.059170 1981 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 21:59:37.059208 kubelet[1981]: I0212 21:59:37.059200 1981 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 21:59:37.059734 kubelet[1981]: I0212 21:59:37.059680 1981 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 21:59:37.062734 kubelet[1981]: I0212 21:59:37.062711 1981 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 21:59:37.065317 kubelet[1981]: I0212 21:59:37.065295 1981 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 21:59:37.065595 kubelet[1981]: I0212 21:59:37.065578 1981 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 21:59:37.065688 kubelet[1981]: I0212 21:59:37.065673 1981 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 21:59:37.065909 kubelet[1981]: I0212 21:59:37.065699 1981 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 21:59:37.065909 kubelet[1981]: I0212 21:59:37.065714 1981 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 
21:59:37.065909 kubelet[1981]: I0212 21:59:37.065908 1981 state_mem.go:36] "Initialized new in-memory state store" Feb 12 21:59:37.070516 kubelet[1981]: I0212 21:59:37.070491 1981 kubelet.go:405] "Attempting to sync node with API server" Feb 12 21:59:37.070516 kubelet[1981]: I0212 21:59:37.070517 1981 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 21:59:37.070820 kubelet[1981]: I0212 21:59:37.070542 1981 kubelet.go:309] "Adding apiserver pod source" Feb 12 21:59:37.070820 kubelet[1981]: I0212 21:59:37.070642 1981 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 21:59:37.071157 kubelet[1981]: E0212 21:59:37.071131 1981 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:37.071487 kubelet[1981]: E0212 21:59:37.071472 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:37.072108 kubelet[1981]: I0212 21:59:37.072094 1981 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 21:59:37.072593 kubelet[1981]: W0212 21:59:37.072577 1981 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 21:59:37.073488 kubelet[1981]: I0212 21:59:37.073474 1981 server.go:1168] "Started kubelet" Feb 12 21:59:37.078420 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 21:59:37.078539 kubelet[1981]: E0212 21:59:37.076410 1981 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 21:59:37.078539 kubelet[1981]: E0212 21:59:37.076458 1981 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 21:59:37.078539 kubelet[1981]: I0212 21:59:37.077379 1981 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 21:59:37.078539 kubelet[1981]: I0212 21:59:37.077857 1981 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 21:59:37.078989 kubelet[1981]: I0212 21:59:37.078975 1981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 21:59:37.079221 kubelet[1981]: I0212 21:59:37.079198 1981 server.go:461] "Adding debug handlers to kubelet server" Feb 12 21:59:37.091632 kubelet[1981]: I0212 21:59:37.091526 1981 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 21:59:37.092528 kubelet[1981]: I0212 21:59:37.092504 1981 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 21:59:37.154879 kubelet[1981]: E0212 21:59:37.154715 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b1b1589e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 73449118, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 73449118, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:59:37.155871 kubelet[1981]: W0212 21:59:37.155701 1981 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 21:59:37.156060 kubelet[1981]: E0212 21:59:37.156042 1981 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 12 21:59:37.156242 kubelet[1981]: W0212 21:59:37.156230 1981 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:59:37.156335 kubelet[1981]: E0212 21:59:37.156326 1981 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:59:37.156724 kubelet[1981]: E0212 21:59:37.156707 1981 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.16.81\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 12 21:59:37.156926 kubelet[1981]: W0212 21:59:37.156913 1981 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:59:37.157231 kubelet[1981]: E0212 21:59:37.157144 1981 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:59:37.158674 kubelet[1981]: E0212 21:59:37.158539 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b1df0de5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 76444645, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 76444645, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" 
in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.162972 kubelet[1981]: I0212 21:59:37.161635 1981 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 21:59:37.162972 kubelet[1981]: I0212 21:59:37.161688 1981 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 21:59:37.162972 kubelet[1981]: I0212 21:59:37.161707 1981 state_mem.go:36] "Initialized new in-memory state store" Feb 12 21:59:37.162972 kubelet[1981]: E0212 21:59:37.162809 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d35e0e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159564814, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159564814, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.165479 kubelet[1981]: E0212 21:59:37.164276 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d37dba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159572922, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159572922, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:59:37.170404 kubelet[1981]: E0212 21:59:37.165818 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d394c6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159578822, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159578822, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.171683 kubelet[1981]: I0212 21:59:37.171609 1981 policy_none.go:49] "None policy: Start" Feb 12 21:59:37.184749 kubelet[1981]: I0212 21:59:37.184719 1981 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 21:59:37.185160 kubelet[1981]: I0212 21:59:37.185138 1981 state_mem.go:35] "Initializing new in-memory state store" Feb 12 21:59:37.195985 systemd[1]: Created slice kubepods.slice. 
Feb 12 21:59:37.196888 kubelet[1981]: I0212 21:59:37.196618 1981 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.81" Feb 12 21:59:37.198895 kubelet[1981]: E0212 21:59:37.198871 1981 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.81" Feb 12 21:59:37.199370 kubelet[1981]: E0212 21:59:37.199295 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d35e0e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159564814, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 196566629, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d35e0e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.201034 kubelet[1981]: E0212 21:59:37.200863 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d37dba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159572922, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 196574443, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d37dba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
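The "nodes is forbidden: User \"system:anonymous\"" and rejected-event errors above are what any API client sees while its credentials are not yet recognized by RBAC; later in this log the kubelet picks up rotated credentials ("Certificate rotation detected") and registration succeeds. A minimal client-go sketch of that same call path, distinguishing this Forbidden state from other failures (the kubeconfig path is hypothetical, the node name is taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the kubelet itself uses its bootstrap/rotated credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Registering a node is simply creating a v1.Node object, as kubelet_node_status.go does.
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "172.31.16.81"}}
	_, err = cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{})
	switch {
	case err == nil:
		fmt.Println("node registered")
	case apierrors.IsForbidden(err):
		// This is the state the log shows: the caller is treated as system:anonymous
		// and RBAC denies "create nodes" at the cluster scope.
		fmt.Println("forbidden, credentials not yet authorized:", err)
	default:
		fmt.Println("other error:", err)
	}
}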
Feb 12 21:59:37.202389 kubelet[1981]: E0212 21:59:37.201865 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d394c6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159578822, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 196580601, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d394c6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.205303 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 21:59:37.210315 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 21:59:37.226224 kubelet[1981]: I0212 21:59:37.225993 1981 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 21:59:37.226546 kubelet[1981]: I0212 21:59:37.226524 1981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 21:59:37.230505 kubelet[1981]: E0212 21:59:37.230479 1981 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.81\" not found" Feb 12 21:59:37.234626 kubelet[1981]: E0212 21:59:37.234528 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79baf606f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 228945142, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 228945142, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" 
in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.283134 kubelet[1981]: I0212 21:59:37.283102 1981 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 21:59:37.284935 kubelet[1981]: I0212 21:59:37.284914 1981 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 21:59:37.285082 kubelet[1981]: I0212 21:59:37.285070 1981 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 21:59:37.285222 kubelet[1981]: I0212 21:59:37.285210 1981 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 21:59:37.285379 kubelet[1981]: E0212 21:59:37.285359 1981 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 21:59:37.288349 kubelet[1981]: W0212 21:59:37.288325 1981 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:59:37.288545 kubelet[1981]: E0212 21:59:37.288528 1981 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:59:37.359177 kubelet[1981]: E0212 21:59:37.359074 1981 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.16.81\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 12 21:59:37.400417 kubelet[1981]: I0212 21:59:37.400389 1981 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.81" Feb 12 21:59:37.404904 kubelet[1981]: E0212 21:59:37.404870 1981 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.81" Feb 12 21:59:37.405102 kubelet[1981]: E0212 21:59:37.405022 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d35e0e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159564814, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 400343008, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d35e0e" 
is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.406122 kubelet[1981]: E0212 21:59:37.406036 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d37dba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159572922, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 400354218, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d37dba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.407270 kubelet[1981]: E0212 21:59:37.407198 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d394c6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159578822, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 400358840, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d394c6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:59:37.761004 kubelet[1981]: E0212 21:59:37.760899 1981 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.16.81\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 12 21:59:37.806297 kubelet[1981]: I0212 21:59:37.806259 1981 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.81" Feb 12 21:59:37.812477 kubelet[1981]: E0212 21:59:37.812304 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d35e0e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159564814, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 806210367, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d35e0e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
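The recurring "Failed to ensure lease exists, will retry" entries come from the kubelet heartbeat controller, which keeps a coordination.k8s.io Lease named after the node in the kube-node-lease namespace. A hedged sketch of that get-or-create cycle with client-go; the kubeconfig path is hypothetical and the 40-second duration is illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "172.31.16.81"
	leases := cs.CoordinationV1().Leases("kube-node-lease")

	lease, err := leases.Get(context.TODO(), nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// First heartbeat: create the lease. Treat the duration as an illustrative value.
		seconds := int32(40)
		now := metav1.NewMicroTime(time.Now())
		lease = &coordinationv1.Lease{
			ObjectMeta: metav1.ObjectMeta{Name: nodeName, Namespace: "kube-node-lease"},
			Spec: coordinationv1.LeaseSpec{
				HolderIdentity:       &nodeName,
				LeaseDurationSeconds: &seconds,
				RenewTime:            &now,
			},
		}
		lease, err = leases.Create(context.TODO(), lease, metav1.CreateOptions{})
	}
	if err != nil {
		// With unauthorized credentials this is exactly the Forbidden error in the log,
		// and the kubelet retries on a growing interval (400ms, 800ms, ...).
		fmt.Println("lease heartbeat failed:", err)
		return
	}
	fmt.Println("lease held by", *lease.Spec.HolderIdentity)
}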
Feb 12 21:59:37.813507 kubelet[1981]: E0212 21:59:37.813447 1981 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.81" Feb 12 21:59:37.813782 kubelet[1981]: E0212 21:59:37.813684 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d37dba", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159572922, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 806223717, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d37dba" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:59:37.817265 kubelet[1981]: E0212 21:59:37.817180 1981 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.81.17b33c79b6d394c6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.81", UID:"172.31.16.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.81"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 159578822, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 59, 37, 806227662, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.16.81.17b33c79b6d394c6" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:59:38.031712 kubelet[1981]: W0212 21:59:38.031601 1981 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:59:38.031712 kubelet[1981]: E0212 21:59:38.031640 1981 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:59:38.052290 kubelet[1981]: W0212 21:59:38.052257 1981 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:59:38.052290 kubelet[1981]: E0212 21:59:38.052298 1981 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:59:38.062459 kubelet[1981]: I0212 21:59:38.062394 1981 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 12 21:59:38.072827 kubelet[1981]: E0212 21:59:38.072780 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:38.530172 kubelet[1981]: E0212 21:59:38.530058 1981 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.16.81" not found Feb 12 21:59:38.568292 kubelet[1981]: E0212 21:59:38.568259 1981 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.16.81\" not found" node="172.31.16.81" Feb 12 21:59:38.616113 kubelet[1981]: I0212 21:59:38.616086 1981 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.81" Feb 12 21:59:38.620863 kubelet[1981]: I0212 21:59:38.620836 1981 kubelet_node_status.go:73] "Successfully registered node" node="172.31.16.81" Feb 12 21:59:38.736662 sudo[1784]: pam_unix(sudo:session): session closed for user root Feb 12 21:59:38.750237 kubelet[1981]: I0212 21:59:38.750204 1981 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 12 21:59:38.751223 env[1564]: time="2024-02-12T21:59:38.751003914Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 21:59:38.751631 kubelet[1981]: I0212 21:59:38.751459 1981 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 12 21:59:38.760690 sshd[1781]: pam_unix(sshd:session): session closed for user core Feb 12 21:59:38.765051 systemd[1]: sshd@4-172.31.16.81:22-139.178.89.65:42634.service: Deactivated successfully. Feb 12 21:59:38.766320 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 21:59:38.767426 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit. Feb 12 21:59:38.769081 systemd-logind[1551]: Removed session 5. 
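Once the node object exists ("Successfully registered node"), the kubelet reads the pod CIDR allocated for it and pushes it to the runtime, which is the "Updating runtime config through cri with podcidr" and "Updating Pod CIDR" entries above. A small read-only sketch of where that value lives on the Node object (kubeconfig path again hypothetical):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "172.31.16.81", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Spec.PodCIDR is what the kubelet hands to the container runtime; in this log it is 192.168.1.0/24.
	fmt.Println("PodCIDR: ", node.Spec.PodCIDR)
	fmt.Println("PodCIDRs:", node.Spec.PodCIDRs)
}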
Feb 12 21:59:39.073608 kubelet[1981]: E0212 21:59:39.073456 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:39.074161 kubelet[1981]: I0212 21:59:39.073466 1981 apiserver.go:52] "Watching apiserver" Feb 12 21:59:39.077594 kubelet[1981]: I0212 21:59:39.077554 1981 topology_manager.go:212] "Topology Admit Handler" Feb 12 21:59:39.077910 kubelet[1981]: I0212 21:59:39.077718 1981 topology_manager.go:212] "Topology Admit Handler" Feb 12 21:59:39.087196 systemd[1]: Created slice kubepods-besteffort-pod4160b673_4d26_4599_b01b_76d9de7b67ba.slice. Feb 12 21:59:39.100559 kubelet[1981]: I0212 21:59:39.100501 1981 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 21:59:39.102508 kubelet[1981]: I0212 21:59:39.102483 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-cgroup\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.102740 kubelet[1981]: I0212 21:59:39.102666 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-config-path\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.103439 kubelet[1981]: I0212 21:59:39.103410 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvnb\" (UniqueName: \"kubernetes.io/projected/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-kube-api-access-gjvnb\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.103631 kubelet[1981]: I0212 21:59:39.103618 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4160b673-4d26-4599-b01b-76d9de7b67ba-kube-proxy\") pod \"kube-proxy-66r2g\" (UID: \"4160b673-4d26-4599-b01b-76d9de7b67ba\") " pod="kube-system/kube-proxy-66r2g" Feb 12 21:59:39.103842 kubelet[1981]: I0212 21:59:39.103805 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-bpf-maps\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.103969 kubelet[1981]: I0212 21:59:39.103959 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-hostproc\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.104175 kubelet[1981]: I0212 21:59:39.104162 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cni-path\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.104277 kubelet[1981]: I0212 21:59:39.104266 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-host-proc-sys-kernel\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.104388 kubelet[1981]: I0212 21:59:39.104378 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4160b673-4d26-4599-b01b-76d9de7b67ba-xtables-lock\") pod \"kube-proxy-66r2g\" (UID: \"4160b673-4d26-4599-b01b-76d9de7b67ba\") " pod="kube-system/kube-proxy-66r2g" Feb 12 21:59:39.104507 kubelet[1981]: I0212 21:59:39.104498 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-lib-modules\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.104598 kubelet[1981]: I0212 21:59:39.104590 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-xtables-lock\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.104692 kubelet[1981]: I0212 21:59:39.104684 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-clustermesh-secrets\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.105567 kubelet[1981]: I0212 21:59:39.105549 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-run\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.105768 kubelet[1981]: I0212 21:59:39.105752 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-etc-cni-netd\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.105903 kubelet[1981]: I0212 21:59:39.105892 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-host-proc-sys-net\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.106095 kubelet[1981]: I0212 21:59:39.106076 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-hubble-tls\") pod \"cilium-s9s9d\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " pod="kube-system/cilium-s9s9d" Feb 12 21:59:39.106445 kubelet[1981]: I0212 21:59:39.106269 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4160b673-4d26-4599-b01b-76d9de7b67ba-lib-modules\") pod \"kube-proxy-66r2g\" (UID: 
\"4160b673-4d26-4599-b01b-76d9de7b67ba\") " pod="kube-system/kube-proxy-66r2g" Feb 12 21:59:39.106641 kubelet[1981]: I0212 21:59:39.106626 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrn2q\" (UniqueName: \"kubernetes.io/projected/4160b673-4d26-4599-b01b-76d9de7b67ba-kube-api-access-rrn2q\") pod \"kube-proxy-66r2g\" (UID: \"4160b673-4d26-4599-b01b-76d9de7b67ba\") " pod="kube-system/kube-proxy-66r2g" Feb 12 21:59:39.106705 systemd[1]: Created slice kubepods-burstable-pod41bca072_1a8a_48fa_96f4_a1f3db33f2e9.slice. Feb 12 21:59:39.107645 kubelet[1981]: I0212 21:59:39.107627 1981 reconciler.go:41] "Reconciler: start to sync state" Feb 12 21:59:39.405023 env[1564]: time="2024-02-12T21:59:39.404288079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66r2g,Uid:4160b673-4d26-4599-b01b-76d9de7b67ba,Namespace:kube-system,Attempt:0,}" Feb 12 21:59:39.417549 env[1564]: time="2024-02-12T21:59:39.417504704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9s9d,Uid:41bca072-1a8a-48fa-96f4-a1f3db33f2e9,Namespace:kube-system,Attempt:0,}" Feb 12 21:59:40.000990 env[1564]: time="2024-02-12T21:59:40.000934372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:40.004787 env[1564]: time="2024-02-12T21:59:40.004738090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:40.007191 env[1564]: time="2024-02-12T21:59:40.007145474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:40.010679 env[1564]: time="2024-02-12T21:59:40.010587245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:40.011790 env[1564]: time="2024-02-12T21:59:40.011752501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:40.014147 env[1564]: time="2024-02-12T21:59:40.014101067Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:40.015554 env[1564]: time="2024-02-12T21:59:40.015516411Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:40.016946 env[1564]: time="2024-02-12T21:59:40.016897276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:40.063820 env[1564]: time="2024-02-12T21:59:40.063742483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:59:40.064115 env[1564]: time="2024-02-12T21:59:40.064068713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:59:40.064357 env[1564]: time="2024-02-12T21:59:40.064272730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:59:40.064746 env[1564]: time="2024-02-12T21:59:40.064704441Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec pid=2039 runtime=io.containerd.runc.v2 Feb 12 21:59:40.065623 env[1564]: time="2024-02-12T21:59:40.065556445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:59:40.065740 env[1564]: time="2024-02-12T21:59:40.065638113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:59:40.065740 env[1564]: time="2024-02-12T21:59:40.065671552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:59:40.065975 env[1564]: time="2024-02-12T21:59:40.065931823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c377f4cc5d6e02c01531046714296c0edbb250bef7fa5112203e7ad885790fc5 pid=2042 runtime=io.containerd.runc.v2 Feb 12 21:59:40.074707 kubelet[1981]: E0212 21:59:40.074667 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:40.101873 systemd[1]: Started cri-containerd-c377f4cc5d6e02c01531046714296c0edbb250bef7fa5112203e7ad885790fc5.scope. Feb 12 21:59:40.111200 systemd[1]: Started cri-containerd-b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec.scope. Feb 12 21:59:40.219111 env[1564]: time="2024-02-12T21:59:40.219064567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66r2g,Uid:4160b673-4d26-4599-b01b-76d9de7b67ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c377f4cc5d6e02c01531046714296c0edbb250bef7fa5112203e7ad885790fc5\"" Feb 12 21:59:40.225941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341807500.mount: Deactivated successfully. Feb 12 21:59:40.229866 env[1564]: time="2024-02-12T21:59:40.229252381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s9s9d,Uid:41bca072-1a8a-48fa-96f4-a1f3db33f2e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\"" Feb 12 21:59:40.230014 env[1564]: time="2024-02-12T21:59:40.229974357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 12 21:59:41.075133 kubelet[1981]: E0212 21:59:41.075030 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:41.517418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202063631.mount: Deactivated successfully. 
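The "starting signal loop" shim entries and the "RunPodSandbox ... returns sandbox id" lines above are containerd creating pause sandboxes for kube-proxy-66r2g and cilium-s9s9d on the kubelet's behalf over the CRI gRPC API. A hedged sketch that talks to the same endpoint and lists those sandboxes; the socket path and the CRI v1 API are assumptions based on a default containerd setup:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI endpoint exposed by containerd on this image.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(context.TODO(), &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, sb := range resp.Items {
		// For this boot the list would include the kube-proxy-66r2g and cilium-s9s9d sandboxes
		// whose ids appear in the log (c377f4cc... and b49bc04e...).
		fmt.Printf("%s/%s -> %s (%s)\n", sb.Metadata.Namespace, sb.Metadata.Name, sb.Id, sb.State)
	}
}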
Feb 12 21:59:42.076021 kubelet[1981]: E0212 21:59:42.075886 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:42.309125 env[1564]: time="2024-02-12T21:59:42.309066373Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:42.312760 env[1564]: time="2024-02-12T21:59:42.312715432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:42.314521 env[1564]: time="2024-02-12T21:59:42.314476746Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:42.316470 env[1564]: time="2024-02-12T21:59:42.316366265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:42.317007 env[1564]: time="2024-02-12T21:59:42.316964591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 12 21:59:42.318758 env[1564]: time="2024-02-12T21:59:42.318717934Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 21:59:42.320122 env[1564]: time="2024-02-12T21:59:42.320085394Z" level=info msg="CreateContainer within sandbox \"c377f4cc5d6e02c01531046714296c0edbb250bef7fa5112203e7ad885790fc5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 21:59:42.338642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596139890.mount: Deactivated successfully. Feb 12 21:59:42.344339 env[1564]: time="2024-02-12T21:59:42.344289006Z" level=info msg="CreateContainer within sandbox \"c377f4cc5d6e02c01531046714296c0edbb250bef7fa5112203e7ad885790fc5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b287f081b29856263d863d282bf664521b5eb6e343b8209ebc7dde79ab55c3d9\"" Feb 12 21:59:42.345125 env[1564]: time="2024-02-12T21:59:42.345087344Z" level=info msg="StartContainer for \"b287f081b29856263d863d282bf664521b5eb6e343b8209ebc7dde79ab55c3d9\"" Feb 12 21:59:42.372759 systemd[1]: Started cri-containerd-b287f081b29856263d863d282bf664521b5eb6e343b8209ebc7dde79ab55c3d9.scope. 
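The "PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference ..." entry above is containerd resolving and unpacking the image before the kube-proxy container is created; the ImageCreate/ImageUpdate events carry the io.cri-containerd.image label because the pull is CRI-managed. A rough equivalent with the containerd Go client, assuming the default socket path and the "k8s.io" namespace the CRI plugin normally uses:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace on a typical containerd setup.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.27.10", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "->", img.Target().Digest)
}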
Feb 12 21:59:42.415164 env[1564]: time="2024-02-12T21:59:42.415109652Z" level=info msg="StartContainer for \"b287f081b29856263d863d282bf664521b5eb6e343b8209ebc7dde79ab55c3d9\" returns successfully" Feb 12 21:59:43.076303 kubelet[1981]: E0212 21:59:43.076260 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:43.314608 kubelet[1981]: I0212 21:59:43.314562 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-66r2g" podStartSLOduration=3.226053841 podCreationTimestamp="2024-02-12 21:59:38 +0000 UTC" firstStartedPulling="2024-02-12 21:59:40.229289241 +0000 UTC m=+3.530547522" lastFinishedPulling="2024-02-12 21:59:42.317586193 +0000 UTC m=+5.618844477" observedRunningTime="2024-02-12 21:59:43.314203894 +0000 UTC m=+6.615462187" watchObservedRunningTime="2024-02-12 21:59:43.314350796 +0000 UTC m=+6.615609090" Feb 12 21:59:44.077500 kubelet[1981]: E0212 21:59:44.077422 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:45.082262 kubelet[1981]: E0212 21:59:45.081372 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:46.082950 kubelet[1981]: E0212 21:59:46.082569 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:47.083261 kubelet[1981]: E0212 21:59:47.083151 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:48.084080 kubelet[1981]: E0212 21:59:48.084017 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:48.744704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780179085.mount: Deactivated successfully. 
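The once-per-second "Unable to read config path ... /etc/kubernetes/manifests" errors only mean the kubelet's static-pod directory does not exist on this node, which is expected when no static pods are configured; creating the (empty) directory quiets the message. A trivial sketch, assuming the kubelet's staticPodPath really is the path shown in the log:

package main

import (
	"log"
	"os"
)

func main() {
	// Path taken from the kubelet errors in this log; adjust if staticPodPath differs.
	const staticPodDir = "/etc/kubernetes/manifests"
	if err := os.MkdirAll(staticPodDir, 0o755); err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s; the kubelet's file config source will start watching it", staticPodDir)
}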
Feb 12 21:59:49.085142 kubelet[1981]: E0212 21:59:49.084736 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:50.085933 kubelet[1981]: E0212 21:59:50.085847 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:51.086143 kubelet[1981]: E0212 21:59:51.086103 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:52.086317 kubelet[1981]: E0212 21:59:52.086247 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:52.367643 env[1564]: time="2024-02-12T21:59:52.367261055Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:52.370376 env[1564]: time="2024-02-12T21:59:52.370333346Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:52.372247 env[1564]: time="2024-02-12T21:59:52.372212036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:59:52.372759 env[1564]: time="2024-02-12T21:59:52.372725327Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 21:59:52.375112 env[1564]: time="2024-02-12T21:59:52.375078365Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:59:52.389837 env[1564]: time="2024-02-12T21:59:52.389788673Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\"" Feb 12 21:59:52.390686 env[1564]: time="2024-02-12T21:59:52.390652376Z" level=info msg="StartContainer for \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\"" Feb 12 21:59:52.417773 systemd[1]: Started cri-containerd-e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac.scope. Feb 12 21:59:52.445743 env[1564]: time="2024-02-12T21:59:52.445693321Z" level=info msg="StartContainer for \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\" returns successfully" Feb 12 21:59:52.459920 systemd[1]: cri-containerd-e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac.scope: Deactivated successfully. Feb 12 21:59:52.486202 amazon-ssm-agent[1539]: 2024-02-12 21:59:52 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Feb 12 21:59:52.823204 env[1564]: time="2024-02-12T21:59:52.822056771Z" level=info msg="shim disconnected" id=e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac Feb 12 21:59:52.823204 env[1564]: time="2024-02-12T21:59:52.822123073Z" level=warning msg="cleaning up after shim disconnected" id=e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac namespace=k8s.io Feb 12 21:59:52.823204 env[1564]: time="2024-02-12T21:59:52.822136663Z" level=info msg="cleaning up dead shim" Feb 12 21:59:52.837166 env[1564]: time="2024-02-12T21:59:52.836559400Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2316 runtime=io.containerd.runc.v2\n" Feb 12 21:59:53.087087 kubelet[1981]: E0212 21:59:53.086959 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:53.336636 env[1564]: time="2024-02-12T21:59:53.336576041Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 21:59:53.360344 env[1564]: time="2024-02-12T21:59:53.360109327Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\"" Feb 12 21:59:53.361681 env[1564]: time="2024-02-12T21:59:53.361648947Z" level=info msg="StartContainer for \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\"" Feb 12 21:59:53.387806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac-rootfs.mount: Deactivated successfully. Feb 12 21:59:53.427177 systemd[1]: Started cri-containerd-93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07.scope. Feb 12 21:59:53.477782 env[1564]: time="2024-02-12T21:59:53.475233557Z" level=info msg="StartContainer for \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\" returns successfully" Feb 12 21:59:53.490019 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 21:59:53.490903 systemd[1]: Stopped systemd-sysctl.service. Feb 12 21:59:53.491591 systemd[1]: Stopping systemd-sysctl.service... Feb 12 21:59:53.494091 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:59:53.499086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 21:59:53.503344 systemd[1]: cri-containerd-93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07.scope: Deactivated successfully. Feb 12 21:59:53.515460 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:59:53.532170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07-rootfs.mount: Deactivated successfully. 
Feb 12 21:59:53.546493 env[1564]: time="2024-02-12T21:59:53.546358255Z" level=info msg="shim disconnected" id=93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07 Feb 12 21:59:53.546843 env[1564]: time="2024-02-12T21:59:53.546404895Z" level=warning msg="cleaning up after shim disconnected" id=93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07 namespace=k8s.io Feb 12 21:59:53.546843 env[1564]: time="2024-02-12T21:59:53.546543548Z" level=info msg="cleaning up dead shim" Feb 12 21:59:53.556922 env[1564]: time="2024-02-12T21:59:53.556860876Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2379 runtime=io.containerd.runc.v2\n" Feb 12 21:59:53.807828 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 12 21:59:54.087386 kubelet[1981]: E0212 21:59:54.087154 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:54.349122 env[1564]: time="2024-02-12T21:59:54.348690366Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 21:59:54.379841 env[1564]: time="2024-02-12T21:59:54.379796696Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\"" Feb 12 21:59:54.380617 env[1564]: time="2024-02-12T21:59:54.380578918Z" level=info msg="StartContainer for \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\"" Feb 12 21:59:54.388707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608680263.mount: Deactivated successfully. Feb 12 21:59:54.445680 systemd[1]: run-containerd-runc-k8s.io-84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c-runc.IJYwqs.mount: Deactivated successfully. Feb 12 21:59:54.450597 systemd[1]: Started cri-containerd-84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c.scope. Feb 12 21:59:54.497278 systemd[1]: cri-containerd-84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c.scope: Deactivated successfully. 
Feb 12 21:59:54.504303 env[1564]: time="2024-02-12T21:59:54.504257011Z" level=info msg="StartContainer for \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\" returns successfully" Feb 12 21:59:54.538655 env[1564]: time="2024-02-12T21:59:54.538599897Z" level=info msg="shim disconnected" id=84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c Feb 12 21:59:54.538920 env[1564]: time="2024-02-12T21:59:54.538658692Z" level=warning msg="cleaning up after shim disconnected" id=84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c namespace=k8s.io Feb 12 21:59:54.538920 env[1564]: time="2024-02-12T21:59:54.538670765Z" level=info msg="cleaning up dead shim" Feb 12 21:59:54.548075 env[1564]: time="2024-02-12T21:59:54.548028338Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2442 runtime=io.containerd.runc.v2\n" Feb 12 21:59:55.088387 kubelet[1981]: E0212 21:59:55.088342 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:55.352290 env[1564]: time="2024-02-12T21:59:55.352011022Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 21:59:55.368383 env[1564]: time="2024-02-12T21:59:55.368330998Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\"" Feb 12 21:59:55.369108 env[1564]: time="2024-02-12T21:59:55.369071686Z" level=info msg="StartContainer for \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\"" Feb 12 21:59:55.395127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c-rootfs.mount: Deactivated successfully. Feb 12 21:59:55.428341 systemd[1]: run-containerd-runc-k8s.io-7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c-runc.BFNvc3.mount: Deactivated successfully. Feb 12 21:59:55.434707 systemd[1]: Started cri-containerd-7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c.scope. Feb 12 21:59:55.469714 systemd[1]: cri-containerd-7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c.scope: Deactivated successfully. Feb 12 21:59:55.472402 env[1564]: time="2024-02-12T21:59:55.472078080Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41bca072_1a8a_48fa_96f4_a1f3db33f2e9.slice/cri-containerd-7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c.scope/memory.events\": no such file or directory" Feb 12 21:59:55.474727 env[1564]: time="2024-02-12T21:59:55.474605265Z" level=info msg="StartContainer for \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\" returns successfully" Feb 12 21:59:55.495637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c-rootfs.mount: Deactivated successfully. 
Feb 12 21:59:55.512480 env[1564]: time="2024-02-12T21:59:55.512408893Z" level=info msg="shim disconnected" id=7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c Feb 12 21:59:55.512480 env[1564]: time="2024-02-12T21:59:55.512479964Z" level=warning msg="cleaning up after shim disconnected" id=7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c namespace=k8s.io Feb 12 21:59:55.512480 env[1564]: time="2024-02-12T21:59:55.512493175Z" level=info msg="cleaning up dead shim" Feb 12 21:59:55.521889 env[1564]: time="2024-02-12T21:59:55.521842031Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:59:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2496 runtime=io.containerd.runc.v2\n" Feb 12 21:59:56.088962 kubelet[1981]: E0212 21:59:56.088920 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:56.357087 env[1564]: time="2024-02-12T21:59:56.356868226Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 21:59:56.376360 env[1564]: time="2024-02-12T21:59:56.376312939Z" level=info msg="CreateContainer within sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\"" Feb 12 21:59:56.377200 env[1564]: time="2024-02-12T21:59:56.377169162Z" level=info msg="StartContainer for \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\"" Feb 12 21:59:56.412619 systemd[1]: run-containerd-runc-k8s.io-ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba-runc.3BA5Wv.mount: Deactivated successfully. Feb 12 21:59:56.414871 systemd[1]: Started cri-containerd-ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba.scope. Feb 12 21:59:56.454355 env[1564]: time="2024-02-12T21:59:56.454305895Z" level=info msg="StartContainer for \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\" returns successfully" Feb 12 21:59:56.658446 kubelet[1981]: I0212 21:59:56.656635 1981 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 21:59:56.923076 kernel: Initializing XFRM netlink socket Feb 12 21:59:57.071526 kubelet[1981]: E0212 21:59:57.071419 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:57.090271 kubelet[1981]: E0212 21:59:57.090228 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:57.389145 kubelet[1981]: I0212 21:59:57.389044 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-s9s9d" podStartSLOduration=7.247101634 podCreationTimestamp="2024-02-12 21:59:38 +0000 UTC" firstStartedPulling="2024-02-12 21:59:40.231181312 +0000 UTC m=+3.532439593" lastFinishedPulling="2024-02-12 21:59:52.373073139 +0000 UTC m=+15.674331421" observedRunningTime="2024-02-12 21:59:57.38880677 +0000 UTC m=+20.690065066" watchObservedRunningTime="2024-02-12 21:59:57.388993462 +0000 UTC m=+20.690251748" Feb 12 21:59:57.389600 systemd[1]: run-containerd-runc-k8s.io-ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba-runc.jrT9gy.mount: Deactivated successfully. 
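The entries from 21:59:52 to 21:59:56 show the cilium-s9s9d pod running its init containers one at a time (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), each scope starting, exiting, and having its shim cleaned up before the long-running cilium-agent container starts. That strict ordering comes from the Pod spec: init containers run sequentially and each must exit successfully before the next begins. A skeletal corev1 sketch of that shape, not the real Cilium manifest; the image tag comes from the PullImage line, while commands, volumes, and environment are omitted because they are not in the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// ciliumPodSkeleton mirrors the container ordering visible in the log; it is only a sketch.
func ciliumPodSkeleton() *corev1.Pod {
	image := "quay.io/cilium/cilium:v1.12.5" // pulled by digest in the actual log line
	initNames := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}

	inits := make([]corev1.Container, 0, len(initNames))
	for _, name := range initNames {
		inits = append(inits, corev1.Container{
			Name:            name,
			Image:           image,
			SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)}, // host-level setup steps
		})
	}

	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cilium-s9s9d", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			InitContainers: inits, // the kubelet runs these sequentially, each to completion
			Containers: []corev1.Container{{
				Name:            "cilium-agent",
				Image:           image,
				SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
			}},
		},
	}
}

func main() {
	for i, c := range ciliumPodSkeleton().Spec.InitContainers {
		fmt.Printf("init[%d] %s\n", i, c.Name)
	}
}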
Feb 12 21:59:58.090735 kubelet[1981]: E0212 21:59:58.090680 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:58.613363 systemd-networkd[1375]: cilium_host: Link UP Feb 12 21:59:58.618935 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 21:59:58.619032 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 21:59:58.616599 systemd-networkd[1375]: cilium_net: Link UP Feb 12 21:59:58.616834 systemd-networkd[1375]: cilium_net: Gained carrier Feb 12 21:59:58.619390 systemd-networkd[1375]: cilium_host: Gained carrier Feb 12 21:59:58.619516 (udev-worker)[2637]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:59:58.619887 (udev-worker)[2636]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:59:58.793356 (udev-worker)[2595]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:59:58.804055 systemd-networkd[1375]: cilium_vxlan: Link UP Feb 12 21:59:58.804064 systemd-networkd[1375]: cilium_vxlan: Gained carrier Feb 12 21:59:58.821558 systemd-networkd[1375]: cilium_host: Gained IPv6LL Feb 12 21:59:59.069491 kernel: NET: Registered PF_ALG protocol family Feb 12 21:59:59.091480 kubelet[1981]: E0212 21:59:59.091419 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:59:59.109568 systemd-networkd[1375]: cilium_net: Gained IPv6LL Feb 12 21:59:59.949418 (udev-worker)[2658]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:59:59.957366 systemd-networkd[1375]: lxc_health: Link UP Feb 12 21:59:59.965764 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 21:59:59.970125 systemd-networkd[1375]: lxc_health: Gained carrier Feb 12 22:00:00.092808 kubelet[1981]: E0212 22:00:00.092561 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:00.293578 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Feb 12 22:00:01.095899 kubelet[1981]: E0212 22:00:01.095799 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:01.390398 systemd-networkd[1375]: lxc_health: Gained IPv6LL Feb 12 22:00:02.099376 kubelet[1981]: E0212 22:00:02.099331 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:03.101988 kubelet[1981]: E0212 22:00:03.101947 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:03.515422 kubelet[1981]: I0212 22:00:03.515378 1981 topology_manager.go:212] "Topology Admit Handler" Feb 12 22:00:03.526657 systemd[1]: Created slice kubepods-besteffort-podb4e4f5ad_e508_4470_98e1_713753518d1b.slice. 
Feb 12 22:00:03.632466 kubelet[1981]: I0212 22:00:03.632418 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htd8x\" (UniqueName: \"kubernetes.io/projected/b4e4f5ad-e508-4470-98e1-713753518d1b-kube-api-access-htd8x\") pod \"nginx-deployment-845c78c8b9-8sxfc\" (UID: \"b4e4f5ad-e508-4470-98e1-713753518d1b\") " pod="default/nginx-deployment-845c78c8b9-8sxfc" Feb 12 22:00:03.842224 env[1564]: time="2024-02-12T22:00:03.842079222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-8sxfc,Uid:b4e4f5ad-e508-4470-98e1-713753518d1b,Namespace:default,Attempt:0,}" Feb 12 22:00:03.944280 systemd-networkd[1375]: lxcb085a8c7eb5a: Link UP Feb 12 22:00:03.951909 (udev-worker)[2992]: Network interface NamePolicy= disabled on kernel command line. Feb 12 22:00:03.974606 kernel: eth0: renamed from tmpc4479 Feb 12 22:00:03.990322 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 22:00:03.990497 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb085a8c7eb5a: link becomes ready Feb 12 22:00:03.990805 systemd-networkd[1375]: lxcb085a8c7eb5a: Gained carrier Feb 12 22:00:04.104932 kubelet[1981]: E0212 22:00:04.104796 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:05.105342 kubelet[1981]: E0212 22:00:05.105293 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:05.741707 systemd-networkd[1375]: lxcb085a8c7eb5a: Gained IPv6LL Feb 12 22:00:06.107320 kubelet[1981]: E0212 22:00:06.107149 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:07.108290 kubelet[1981]: E0212 22:00:07.108246 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:08.109299 kubelet[1981]: E0212 22:00:08.109257 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:09.073539 update_engine[1552]: I0212 22:00:09.073495 1552 update_attempter.cc:509] Updating boot flags... Feb 12 22:00:09.111467 kubelet[1981]: E0212 22:00:09.111084 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:09.973209 env[1564]: time="2024-02-12T22:00:09.973125047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 22:00:09.973209 env[1564]: time="2024-02-12T22:00:09.973167391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 22:00:09.974203 env[1564]: time="2024-02-12T22:00:09.973184353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 22:00:09.974203 env[1564]: time="2024-02-12T22:00:09.973560533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c44792ef5e450d0945f485f9b78067a14585967d81754d1b1a71ad21e51f79ad pid=3197 runtime=io.containerd.runc.v2 Feb 12 22:00:10.009643 systemd[1]: Started cri-containerd-c44792ef5e450d0945f485f9b78067a14585967d81754d1b1a71ad21e51f79ad.scope. 
Feb 12 22:00:10.071391 env[1564]: time="2024-02-12T22:00:10.071341355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-8sxfc,Uid:b4e4f5ad-e508-4470-98e1-713753518d1b,Namespace:default,Attempt:0,} returns sandbox id \"c44792ef5e450d0945f485f9b78067a14585967d81754d1b1a71ad21e51f79ad\"" Feb 12 22:00:10.076117 env[1564]: time="2024-02-12T22:00:10.076071091Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 22:00:10.112288 kubelet[1981]: E0212 22:00:10.112229 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:11.112559 kubelet[1981]: E0212 22:00:11.112505 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:12.112860 kubelet[1981]: E0212 22:00:12.112811 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:13.113542 kubelet[1981]: E0212 22:00:13.113493 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:14.113937 kubelet[1981]: E0212 22:00:14.113884 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:15.114087 kubelet[1981]: E0212 22:00:15.114004 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:16.115238 kubelet[1981]: E0212 22:00:16.115186 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:17.071296 kubelet[1981]: E0212 22:00:17.071248 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:17.115407 kubelet[1981]: E0212 22:00:17.115367 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:18.116211 kubelet[1981]: E0212 22:00:18.116159 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:19.117133 kubelet[1981]: E0212 22:00:19.117083 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:20.117932 kubelet[1981]: E0212 22:00:20.117889 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:21.120099 kubelet[1981]: E0212 22:00:21.120053 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:22.120975 kubelet[1981]: E0212 22:00:22.120906 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:22.510550 amazon-ssm-agent[1539]: 2024-02-12 22:00:22 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 12 22:00:23.123124 kubelet[1981]: E0212 22:00:23.122802 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:23.828531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519081947.mount: Deactivated successfully. 
Feb 12 22:00:24.125767 kubelet[1981]: E0212 22:00:24.124205 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:25.126521 kubelet[1981]: E0212 22:00:25.126470 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:25.188413 env[1564]: time="2024-02-12T22:00:25.188353216Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:25.192767 env[1564]: time="2024-02-12T22:00:25.192644117Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:25.195582 env[1564]: time="2024-02-12T22:00:25.195536269Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:25.198331 env[1564]: time="2024-02-12T22:00:25.198285182Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:25.199399 env[1564]: time="2024-02-12T22:00:25.199358985Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 22:00:25.202912 env[1564]: time="2024-02-12T22:00:25.202870255Z" level=info msg="CreateContainer within sandbox \"c44792ef5e450d0945f485f9b78067a14585967d81754d1b1a71ad21e51f79ad\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 22:00:25.223604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906620394.mount: Deactivated successfully. Feb 12 22:00:25.237750 env[1564]: time="2024-02-12T22:00:25.237700025Z" level=info msg="CreateContainer within sandbox \"c44792ef5e450d0945f485f9b78067a14585967d81754d1b1a71ad21e51f79ad\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"390bb84e52d18b1901438383264e9324302dec0dd8c39a6f450e4ff98cd1159e\"" Feb 12 22:00:25.238640 env[1564]: time="2024-02-12T22:00:25.238600363Z" level=info msg="StartContainer for \"390bb84e52d18b1901438383264e9324302dec0dd8c39a6f450e4ff98cd1159e\"" Feb 12 22:00:25.287232 systemd[1]: Started cri-containerd-390bb84e52d18b1901438383264e9324302dec0dd8c39a6f450e4ff98cd1159e.scope. 
Feb 12 22:00:25.325633 env[1564]: time="2024-02-12T22:00:25.325576293Z" level=info msg="StartContainer for \"390bb84e52d18b1901438383264e9324302dec0dd8c39a6f450e4ff98cd1159e\" returns successfully" Feb 12 22:00:25.507488 kubelet[1981]: I0212 22:00:25.507452 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-8sxfc" podStartSLOduration=7.382671862 podCreationTimestamp="2024-02-12 22:00:03 +0000 UTC" firstStartedPulling="2024-02-12 22:00:10.075105325 +0000 UTC m=+33.376363608" lastFinishedPulling="2024-02-12 22:00:25.199821091 +0000 UTC m=+48.501079378" observedRunningTime="2024-02-12 22:00:25.507029234 +0000 UTC m=+48.808287527" watchObservedRunningTime="2024-02-12 22:00:25.507387632 +0000 UTC m=+48.808645926" Feb 12 22:00:26.127559 kubelet[1981]: E0212 22:00:26.127510 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:27.128446 kubelet[1981]: E0212 22:00:27.128381 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:28.130062 kubelet[1981]: E0212 22:00:28.129994 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:28.759029 kubelet[1981]: I0212 22:00:28.758981 1981 topology_manager.go:212] "Topology Admit Handler" Feb 12 22:00:28.767200 systemd[1]: Created slice kubepods-besteffort-pod6df3e303_77e7_451f_aab8_77d588b34599.slice. Feb 12 22:00:28.917515 kubelet[1981]: I0212 22:00:28.917464 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6df3e303-77e7-451f-aab8-77d588b34599-data\") pod \"nfs-server-provisioner-0\" (UID: \"6df3e303-77e7-451f-aab8-77d588b34599\") " pod="default/nfs-server-provisioner-0" Feb 12 22:00:28.917515 kubelet[1981]: I0212 22:00:28.917519 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6522k\" (UniqueName: \"kubernetes.io/projected/6df3e303-77e7-451f-aab8-77d588b34599-kube-api-access-6522k\") pod \"nfs-server-provisioner-0\" (UID: \"6df3e303-77e7-451f-aab8-77d588b34599\") " pod="default/nfs-server-provisioner-0" Feb 12 22:00:29.071895 env[1564]: time="2024-02-12T22:00:29.071395310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6df3e303-77e7-451f-aab8-77d588b34599,Namespace:default,Attempt:0,}" Feb 12 22:00:29.130950 kubelet[1981]: E0212 22:00:29.130870 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:29.150810 (udev-worker)[3290]: Network interface NamePolicy= disabled on kernel command line. Feb 12 22:00:29.151672 (udev-worker)[3289]: Network interface NamePolicy= disabled on kernel command line. Feb 12 22:00:29.152381 systemd-networkd[1375]: lxca82fda737e0f: Link UP Feb 12 22:00:29.162612 kernel: eth0: renamed from tmp0a69f Feb 12 22:00:29.173112 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 22:00:29.173334 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca82fda737e0f: link becomes ready Feb 12 22:00:29.172152 systemd-networkd[1375]: lxca82fda737e0f: Gained carrier Feb 12 22:00:29.514494 env[1564]: time="2024-02-12T22:00:29.514368979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 22:00:29.514670 env[1564]: time="2024-02-12T22:00:29.514524089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 22:00:29.514670 env[1564]: time="2024-02-12T22:00:29.514556157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 22:00:29.514797 env[1564]: time="2024-02-12T22:00:29.514761020Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a69f3a29a35d2694429e135cb843adc0af4f2a1deb692dca94625e14a6ea596 pid=3317 runtime=io.containerd.runc.v2 Feb 12 22:00:29.547918 systemd[1]: Started cri-containerd-0a69f3a29a35d2694429e135cb843adc0af4f2a1deb692dca94625e14a6ea596.scope. Feb 12 22:00:29.607801 env[1564]: time="2024-02-12T22:00:29.607747316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6df3e303-77e7-451f-aab8-77d588b34599,Namespace:default,Attempt:0,} returns sandbox id \"0a69f3a29a35d2694429e135cb843adc0af4f2a1deb692dca94625e14a6ea596\"" Feb 12 22:00:29.609498 env[1564]: time="2024-02-12T22:00:29.609455995Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 22:00:30.131719 kubelet[1981]: E0212 22:00:30.131684 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:30.388735 systemd-networkd[1375]: lxca82fda737e0f: Gained IPv6LL Feb 12 22:00:31.138622 kubelet[1981]: E0212 22:00:31.138411 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:32.139197 kubelet[1981]: E0212 22:00:32.139158 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:32.832178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268920403.mount: Deactivated successfully. 
Feb 12 22:00:33.139701 kubelet[1981]: E0212 22:00:33.139593 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:34.140673 kubelet[1981]: E0212 22:00:34.140604 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:35.141866 kubelet[1981]: E0212 22:00:35.141795 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:35.934795 env[1564]: time="2024-02-12T22:00:35.934554965Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:35.938276 env[1564]: time="2024-02-12T22:00:35.938229696Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:35.942454 env[1564]: time="2024-02-12T22:00:35.942397173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:35.948275 env[1564]: time="2024-02-12T22:00:35.948230452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:35.950621 env[1564]: time="2024-02-12T22:00:35.950464952Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 22:00:35.956347 env[1564]: time="2024-02-12T22:00:35.956274514Z" level=info msg="CreateContainer within sandbox \"0a69f3a29a35d2694429e135cb843adc0af4f2a1deb692dca94625e14a6ea596\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 22:00:35.985310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260375341.mount: Deactivated successfully. Feb 12 22:00:35.998765 env[1564]: time="2024-02-12T22:00:35.996827684Z" level=info msg="CreateContainer within sandbox \"0a69f3a29a35d2694429e135cb843adc0af4f2a1deb692dca94625e14a6ea596\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a5a10053223054d4f7c0da92c917cad1055ef993cbd5c93e6312a35ae8381ade\"" Feb 12 22:00:35.999618 env[1564]: time="2024-02-12T22:00:35.999549121Z" level=info msg="StartContainer for \"a5a10053223054d4f7c0da92c917cad1055ef993cbd5c93e6312a35ae8381ade\"" Feb 12 22:00:36.034846 systemd[1]: Started cri-containerd-a5a10053223054d4f7c0da92c917cad1055ef993cbd5c93e6312a35ae8381ade.scope. 
Feb 12 22:00:36.084564 env[1564]: time="2024-02-12T22:00:36.084385613Z" level=info msg="StartContainer for \"a5a10053223054d4f7c0da92c917cad1055ef993cbd5c93e6312a35ae8381ade\" returns successfully" Feb 12 22:00:36.142913 kubelet[1981]: E0212 22:00:36.142857 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:36.520985 kubelet[1981]: I0212 22:00:36.520950 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.177675327 podCreationTimestamp="2024-02-12 22:00:28 +0000 UTC" firstStartedPulling="2024-02-12 22:00:29.608976069 +0000 UTC m=+52.910234351" lastFinishedPulling="2024-02-12 22:00:35.952205896 +0000 UTC m=+59.253464182" observedRunningTime="2024-02-12 22:00:36.520022833 +0000 UTC m=+59.821281127" watchObservedRunningTime="2024-02-12 22:00:36.520905158 +0000 UTC m=+59.822163450" Feb 12 22:00:37.070985 kubelet[1981]: E0212 22:00:37.070924 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:37.144007 kubelet[1981]: E0212 22:00:37.143949 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:38.144632 kubelet[1981]: E0212 22:00:38.144582 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:39.144832 kubelet[1981]: E0212 22:00:39.144754 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:40.144951 kubelet[1981]: E0212 22:00:40.144910 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:41.146106 kubelet[1981]: E0212 22:00:41.146051 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:42.147084 kubelet[1981]: E0212 22:00:42.147028 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:43.147607 kubelet[1981]: E0212 22:00:43.147457 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:44.148665 kubelet[1981]: E0212 22:00:44.148616 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:45.149387 kubelet[1981]: E0212 22:00:45.149335 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:46.150352 kubelet[1981]: E0212 22:00:46.150300 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:46.189595 kubelet[1981]: I0212 22:00:46.189558 1981 topology_manager.go:212] "Topology Admit Handler" Feb 12 22:00:46.196080 systemd[1]: Created slice kubepods-besteffort-pod0566e28f_40ae_41af_87f7_df669e9de233.slice. 
Feb 12 22:00:46.353828 kubelet[1981]: I0212 22:00:46.353784 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-778ff1c0-de93-449f-a661-200472e8a8ce\" (UniqueName: \"kubernetes.io/nfs/0566e28f-40ae-41af-87f7-df669e9de233-pvc-778ff1c0-de93-449f-a661-200472e8a8ce\") pod \"test-pod-1\" (UID: \"0566e28f-40ae-41af-87f7-df669e9de233\") " pod="default/test-pod-1" Feb 12 22:00:46.354013 kubelet[1981]: I0212 22:00:46.353844 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ltcq\" (UniqueName: \"kubernetes.io/projected/0566e28f-40ae-41af-87f7-df669e9de233-kube-api-access-2ltcq\") pod \"test-pod-1\" (UID: \"0566e28f-40ae-41af-87f7-df669e9de233\") " pod="default/test-pod-1" Feb 12 22:00:46.515540 kernel: FS-Cache: Loaded Feb 12 22:00:46.572610 kernel: RPC: Registered named UNIX socket transport module. Feb 12 22:00:46.573012 kernel: RPC: Registered udp transport module. Feb 12 22:00:46.573164 kernel: RPC: Registered tcp transport module. Feb 12 22:00:46.573220 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 12 22:00:46.638470 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 22:00:46.884674 kernel: NFS: Registering the id_resolver key type Feb 12 22:00:46.884901 kernel: Key type id_resolver registered Feb 12 22:00:46.884936 kernel: Key type id_legacy registered Feb 12 22:00:46.965256 nfsidmap[3491]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 12 22:00:46.969937 nfsidmap[3492]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 12 22:00:47.101450 env[1564]: time="2024-02-12T22:00:47.101380599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0566e28f-40ae-41af-87f7-df669e9de233,Namespace:default,Attempt:0,}" Feb 12 22:00:47.144185 (udev-worker)[3484]: Network interface NamePolicy= disabled on kernel command line. Feb 12 22:00:47.145058 (udev-worker)[3487]: Network interface NamePolicy= disabled on kernel command line. Feb 12 22:00:47.148971 systemd-networkd[1375]: lxc0f81072fc85c: Link UP Feb 12 22:00:47.150812 kubelet[1981]: E0212 22:00:47.150550 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:47.155469 kernel: eth0: renamed from tmp34134 Feb 12 22:00:47.160870 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 22:00:47.160965 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0f81072fc85c: link becomes ready Feb 12 22:00:47.161231 systemd-networkd[1375]: lxc0f81072fc85c: Gained carrier Feb 12 22:00:47.499566 env[1564]: time="2024-02-12T22:00:47.499491165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 22:00:47.499775 env[1564]: time="2024-02-12T22:00:47.499554398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 22:00:47.499867 env[1564]: time="2024-02-12T22:00:47.499764041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 22:00:47.501574 env[1564]: time="2024-02-12T22:00:47.500483733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/341343374cdebd4d79f8625152189db47fde316430eed65856ce5fc90d660cf1 pid=3518 runtime=io.containerd.runc.v2 Feb 12 22:00:47.532573 systemd[1]: run-containerd-runc-k8s.io-341343374cdebd4d79f8625152189db47fde316430eed65856ce5fc90d660cf1-runc.lUQ8kM.mount: Deactivated successfully. Feb 12 22:00:47.544530 systemd[1]: Started cri-containerd-341343374cdebd4d79f8625152189db47fde316430eed65856ce5fc90d660cf1.scope. Feb 12 22:00:47.602618 env[1564]: time="2024-02-12T22:00:47.602580555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0566e28f-40ae-41af-87f7-df669e9de233,Namespace:default,Attempt:0,} returns sandbox id \"341343374cdebd4d79f8625152189db47fde316430eed65856ce5fc90d660cf1\"" Feb 12 22:00:47.605039 env[1564]: time="2024-02-12T22:00:47.604971986Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 22:00:48.151284 kubelet[1981]: E0212 22:00:48.151232 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:48.168695 env[1564]: time="2024-02-12T22:00:48.168644439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:48.171633 env[1564]: time="2024-02-12T22:00:48.171588713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:48.174538 env[1564]: time="2024-02-12T22:00:48.174497734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:48.184469 env[1564]: time="2024-02-12T22:00:48.184413376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:00:48.184875 env[1564]: time="2024-02-12T22:00:48.184840824Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 22:00:48.187084 env[1564]: time="2024-02-12T22:00:48.187027841Z" level=info msg="CreateContainer within sandbox \"341343374cdebd4d79f8625152189db47fde316430eed65856ce5fc90d660cf1\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 22:00:48.204985 env[1564]: time="2024-02-12T22:00:48.204937797Z" level=info msg="CreateContainer within sandbox \"341343374cdebd4d79f8625152189db47fde316430eed65856ce5fc90d660cf1\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"317971006189c1515605adde93caa2788465d3dbd2677aab835ceb7a77cecaac\"" Feb 12 22:00:48.205862 env[1564]: time="2024-02-12T22:00:48.205822804Z" level=info msg="StartContainer for \"317971006189c1515605adde93caa2788465d3dbd2677aab835ceb7a77cecaac\"" Feb 12 22:00:48.224588 systemd[1]: Started cri-containerd-317971006189c1515605adde93caa2788465d3dbd2677aab835ceb7a77cecaac.scope. 
Feb 12 22:00:48.273569 env[1564]: time="2024-02-12T22:00:48.273513215Z" level=info msg="StartContainer for \"317971006189c1515605adde93caa2788465d3dbd2677aab835ceb7a77cecaac\" returns successfully" Feb 12 22:00:48.429746 systemd-networkd[1375]: lxc0f81072fc85c: Gained IPv6LL Feb 12 22:00:48.555975 kubelet[1981]: I0212 22:00:48.555934 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.975134886 podCreationTimestamp="2024-02-12 22:00:29 +0000 UTC" firstStartedPulling="2024-02-12 22:00:47.604414532 +0000 UTC m=+70.905672802" lastFinishedPulling="2024-02-12 22:00:48.185173999 +0000 UTC m=+71.486432273" observedRunningTime="2024-02-12 22:00:48.555596707 +0000 UTC m=+71.856855000" watchObservedRunningTime="2024-02-12 22:00:48.555894357 +0000 UTC m=+71.857152685" Feb 12 22:00:49.152006 kubelet[1981]: E0212 22:00:49.151956 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:50.152615 kubelet[1981]: E0212 22:00:50.152513 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:51.152728 kubelet[1981]: E0212 22:00:51.152665 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:52.153299 kubelet[1981]: E0212 22:00:52.153250 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:53.153448 kubelet[1981]: E0212 22:00:53.153392 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:54.153736 kubelet[1981]: E0212 22:00:54.153658 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:54.772208 systemd[1]: run-containerd-runc-k8s.io-ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba-runc.1cGVsG.mount: Deactivated successfully. Feb 12 22:00:54.800998 env[1564]: time="2024-02-12T22:00:54.800930018Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 22:00:54.818879 env[1564]: time="2024-02-12T22:00:54.818830028Z" level=info msg="StopContainer for \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\" with timeout 1 (s)" Feb 12 22:00:54.819214 env[1564]: time="2024-02-12T22:00:54.819183220Z" level=info msg="Stop container \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\" with signal terminated" Feb 12 22:00:54.830091 systemd-networkd[1375]: lxc_health: Link DOWN Feb 12 22:00:54.830100 systemd-networkd[1375]: lxc_health: Lost carrier Feb 12 22:00:54.972014 systemd[1]: cri-containerd-ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba.scope: Deactivated successfully. Feb 12 22:00:54.972395 systemd[1]: cri-containerd-ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba.scope: Consumed 8.983s CPU time. Feb 12 22:00:55.011230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba-rootfs.mount: Deactivated successfully. 
Feb 12 22:00:55.033900 env[1564]: time="2024-02-12T22:00:55.033375038Z" level=info msg="shim disconnected" id=ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba Feb 12 22:00:55.033900 env[1564]: time="2024-02-12T22:00:55.033451148Z" level=warning msg="cleaning up after shim disconnected" id=ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba namespace=k8s.io Feb 12 22:00:55.033900 env[1564]: time="2024-02-12T22:00:55.033465056Z" level=info msg="cleaning up dead shim" Feb 12 22:00:55.045110 env[1564]: time="2024-02-12T22:00:55.045056347Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3650 runtime=io.containerd.runc.v2\n" Feb 12 22:00:55.048055 env[1564]: time="2024-02-12T22:00:55.047974831Z" level=info msg="StopContainer for \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\" returns successfully" Feb 12 22:00:55.049202 env[1564]: time="2024-02-12T22:00:55.049135537Z" level=info msg="StopPodSandbox for \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\"" Feb 12 22:00:55.049378 env[1564]: time="2024-02-12T22:00:55.049232045Z" level=info msg="Container to stop \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 22:00:55.049378 env[1564]: time="2024-02-12T22:00:55.049281931Z" level=info msg="Container to stop \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 22:00:55.049378 env[1564]: time="2024-02-12T22:00:55.049300153Z" level=info msg="Container to stop \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 22:00:55.049378 env[1564]: time="2024-02-12T22:00:55.049319366Z" level=info msg="Container to stop \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 22:00:55.049378 env[1564]: time="2024-02-12T22:00:55.049360986Z" level=info msg="Container to stop \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 22:00:55.054365 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec-shm.mount: Deactivated successfully. Feb 12 22:00:55.063081 systemd[1]: cri-containerd-b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec.scope: Deactivated successfully. 
Feb 12 22:00:55.099715 env[1564]: time="2024-02-12T22:00:55.099655521Z" level=info msg="shim disconnected" id=b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec Feb 12 22:00:55.099964 env[1564]: time="2024-02-12T22:00:55.099718331Z" level=warning msg="cleaning up after shim disconnected" id=b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec namespace=k8s.io Feb 12 22:00:55.099964 env[1564]: time="2024-02-12T22:00:55.099733706Z" level=info msg="cleaning up dead shim" Feb 12 22:00:55.114208 env[1564]: time="2024-02-12T22:00:55.114160119Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3682 runtime=io.containerd.runc.v2\n" Feb 12 22:00:55.114867 env[1564]: time="2024-02-12T22:00:55.114831721Z" level=info msg="TearDown network for sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" successfully" Feb 12 22:00:55.114867 env[1564]: time="2024-02-12T22:00:55.114861281Z" level=info msg="StopPodSandbox for \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" returns successfully" Feb 12 22:00:55.154804 kubelet[1981]: E0212 22:00:55.154657 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:55.208790 kubelet[1981]: I0212 22:00:55.208737 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjvnb\" (UniqueName: \"kubernetes.io/projected/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-kube-api-access-gjvnb\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.208790 kubelet[1981]: I0212 22:00:55.208794 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cni-path\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209143 kubelet[1981]: I0212 22:00:55.208830 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-config-path\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209143 kubelet[1981]: I0212 22:00:55.208856 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-host-proc-sys-kernel\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209143 kubelet[1981]: I0212 22:00:55.208880 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-xtables-lock\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209143 kubelet[1981]: I0212 22:00:55.208902 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-host-proc-sys-net\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209143 kubelet[1981]: I0212 22:00:55.209038 1981 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-bpf-maps\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209143 kubelet[1981]: I0212 22:00:55.209069 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-hostproc\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209533 kubelet[1981]: I0212 22:00:55.209092 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-lib-modules\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209533 kubelet[1981]: I0212 22:00:55.209117 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-run\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209533 kubelet[1981]: I0212 22:00:55.209145 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-etc-cni-netd\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209533 kubelet[1981]: I0212 22:00:55.209214 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-hubble-tls\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209533 kubelet[1981]: I0212 22:00:55.209245 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-cgroup\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.209533 kubelet[1981]: I0212 22:00:55.209279 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-clustermesh-secrets\") pod \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\" (UID: \"41bca072-1a8a-48fa-96f4-a1f3db33f2e9\") " Feb 12 22:00:55.210570 kubelet[1981]: I0212 22:00:55.210027 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.210570 kubelet[1981]: I0212 22:00:55.210247 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.210570 kubelet[1981]: I0212 22:00:55.210286 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.210570 kubelet[1981]: I0212 22:00:55.210313 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.210570 kubelet[1981]: I0212 22:00:55.210339 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.210880 kubelet[1981]: I0212 22:00:55.210564 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.210880 kubelet[1981]: I0212 22:00:55.210594 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.210880 kubelet[1981]: I0212 22:00:55.210614 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.210880 kubelet[1981]: W0212 22:00:55.210747 1981 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/41bca072-1a8a-48fa-96f4-a1f3db33f2e9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 22:00:55.211473 kubelet[1981]: I0212 22:00:55.211449 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.211613 kubelet[1981]: I0212 22:00:55.211596 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:55.214285 kubelet[1981]: I0212 22:00:55.214252 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 22:00:55.219785 kubelet[1981]: I0212 22:00:55.219743 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-kube-api-access-gjvnb" (OuterVolumeSpecName: "kube-api-access-gjvnb") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "kube-api-access-gjvnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 22:00:55.220978 kubelet[1981]: I0212 22:00:55.220943 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 22:00:55.222917 kubelet[1981]: I0212 22:00:55.222885 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "41bca072-1a8a-48fa-96f4-a1f3db33f2e9" (UID: "41bca072-1a8a-48fa-96f4-a1f3db33f2e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 22:00:55.294050 systemd[1]: Removed slice kubepods-burstable-pod41bca072_1a8a_48fa_96f4_a1f3db33f2e9.slice. Feb 12 22:00:55.294178 systemd[1]: kubepods-burstable-pod41bca072_1a8a_48fa_96f4_a1f3db33f2e9.slice: Consumed 9.100s CPU time. 
Feb 12 22:00:55.309589 kubelet[1981]: I0212 22:00:55.309545 1981 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-lib-modules\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309589 kubelet[1981]: I0212 22:00:55.309584 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-run\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309589 kubelet[1981]: I0212 22:00:55.309599 1981 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-etc-cni-netd\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309844 kubelet[1981]: I0212 22:00:55.309611 1981 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-hubble-tls\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309844 kubelet[1981]: I0212 22:00:55.309624 1981 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-bpf-maps\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309844 kubelet[1981]: I0212 22:00:55.309636 1981 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-hostproc\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309844 kubelet[1981]: I0212 22:00:55.309648 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-cgroup\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309844 kubelet[1981]: I0212 22:00:55.309664 1981 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-clustermesh-secrets\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309844 kubelet[1981]: I0212 22:00:55.309677 1981 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gjvnb\" (UniqueName: \"kubernetes.io/projected/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-kube-api-access-gjvnb\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309844 kubelet[1981]: I0212 22:00:55.309691 1981 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cni-path\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.309844 kubelet[1981]: I0212 22:00:55.309704 1981 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-host-proc-sys-kernel\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.310045 kubelet[1981]: I0212 22:00:55.309717 1981 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-xtables-lock\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.310045 kubelet[1981]: I0212 22:00:55.309730 1981 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-host-proc-sys-net\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 
22:00:55.310045 kubelet[1981]: I0212 22:00:55.309744 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41bca072-1a8a-48fa-96f4-a1f3db33f2e9-cilium-config-path\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:55.560681 kubelet[1981]: I0212 22:00:55.560041 1981 scope.go:115] "RemoveContainer" containerID="ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba" Feb 12 22:00:55.564480 env[1564]: time="2024-02-12T22:00:55.564413265Z" level=info msg="RemoveContainer for \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\"" Feb 12 22:00:55.573100 env[1564]: time="2024-02-12T22:00:55.573056406Z" level=info msg="RemoveContainer for \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\" returns successfully" Feb 12 22:00:55.576071 kubelet[1981]: I0212 22:00:55.576046 1981 scope.go:115] "RemoveContainer" containerID="7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c" Feb 12 22:00:55.582375 env[1564]: time="2024-02-12T22:00:55.582081900Z" level=info msg="RemoveContainer for \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\"" Feb 12 22:00:55.588837 env[1564]: time="2024-02-12T22:00:55.588791121Z" level=info msg="RemoveContainer for \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\" returns successfully" Feb 12 22:00:55.589233 kubelet[1981]: I0212 22:00:55.589199 1981 scope.go:115] "RemoveContainer" containerID="84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c" Feb 12 22:00:55.591138 env[1564]: time="2024-02-12T22:00:55.591096138Z" level=info msg="RemoveContainer for \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\"" Feb 12 22:00:55.596396 env[1564]: time="2024-02-12T22:00:55.596351788Z" level=info msg="RemoveContainer for \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\" returns successfully" Feb 12 22:00:55.596622 kubelet[1981]: I0212 22:00:55.596586 1981 scope.go:115] "RemoveContainer" containerID="93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07" Feb 12 22:00:55.598590 env[1564]: time="2024-02-12T22:00:55.598556283Z" level=info msg="RemoveContainer for \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\"" Feb 12 22:00:55.603734 env[1564]: time="2024-02-12T22:00:55.603505267Z" level=info msg="RemoveContainer for \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\" returns successfully" Feb 12 22:00:55.604083 kubelet[1981]: I0212 22:00:55.604055 1981 scope.go:115] "RemoveContainer" containerID="e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac" Feb 12 22:00:55.605567 env[1564]: time="2024-02-12T22:00:55.605533254Z" level=info msg="RemoveContainer for \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\"" Feb 12 22:00:55.611327 env[1564]: time="2024-02-12T22:00:55.611280835Z" level=info msg="RemoveContainer for \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\" returns successfully" Feb 12 22:00:55.611831 kubelet[1981]: I0212 22:00:55.611805 1981 scope.go:115] "RemoveContainer" containerID="ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba" Feb 12 22:00:55.612261 env[1564]: time="2024-02-12T22:00:55.612177690Z" level=error msg="ContainerStatus for \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\": not found" Feb 12 22:00:55.612593 kubelet[1981]: E0212 22:00:55.612484 1981 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\": not found" containerID="ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba" Feb 12 22:00:55.612685 kubelet[1981]: I0212 22:00:55.612626 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba} err="failed to get container status \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea7ab33d04dc724928eb26ceebcb9b311b3e97ab9c2346d37c3160a7d96d7bba\": not found" Feb 12 22:00:55.612685 kubelet[1981]: I0212 22:00:55.612643 1981 scope.go:115] "RemoveContainer" containerID="7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c" Feb 12 22:00:55.612932 env[1564]: time="2024-02-12T22:00:55.612871962Z" level=error msg="ContainerStatus for \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\": not found" Feb 12 22:00:55.613073 kubelet[1981]: E0212 22:00:55.613052 1981 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\": not found" containerID="7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c" Feb 12 22:00:55.613156 kubelet[1981]: I0212 22:00:55.613090 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c} err="failed to get container status \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a8bf84efbf28ad6cd21f11b36a54ccb20c1e5086df42ec2022f063a207fef4c\": not found" Feb 12 22:00:55.613156 kubelet[1981]: I0212 22:00:55.613103 1981 scope.go:115] "RemoveContainer" containerID="84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c" Feb 12 22:00:55.613563 env[1564]: time="2024-02-12T22:00:55.613436494Z" level=error msg="ContainerStatus for \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\": not found" Feb 12 22:00:55.613694 kubelet[1981]: E0212 22:00:55.613674 1981 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\": not found" containerID="84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c" Feb 12 22:00:55.613768 kubelet[1981]: I0212 22:00:55.613708 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c} err="failed to get container status 
\"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"84f012531feb5484122b9f9710a840e92781680f8e4d408254bb0e2a9e68aa7c\": not found" Feb 12 22:00:55.613768 kubelet[1981]: I0212 22:00:55.613721 1981 scope.go:115] "RemoveContainer" containerID="93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07" Feb 12 22:00:55.614116 env[1564]: time="2024-02-12T22:00:55.613975780Z" level=error msg="ContainerStatus for \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\": not found" Feb 12 22:00:55.614306 kubelet[1981]: E0212 22:00:55.614237 1981 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\": not found" containerID="93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07" Feb 12 22:00:55.614379 kubelet[1981]: I0212 22:00:55.614324 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07} err="failed to get container status \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\": rpc error: code = NotFound desc = an error occurred when try to find container \"93c2f1c44614451ac0ad99e6a806abc86a6a53b6af2848b359264273d171ed07\": not found" Feb 12 22:00:55.614379 kubelet[1981]: I0212 22:00:55.614339 1981 scope.go:115] "RemoveContainer" containerID="e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac" Feb 12 22:00:55.614603 env[1564]: time="2024-02-12T22:00:55.614545706Z" level=error msg="ContainerStatus for \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\": not found" Feb 12 22:00:55.614723 kubelet[1981]: E0212 22:00:55.614703 1981 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\": not found" containerID="e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac" Feb 12 22:00:55.614805 kubelet[1981]: I0212 22:00:55.614735 1981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac} err="failed to get container status \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6a88514040565a625923520d41469001ce4be13c2538f1da51bb5f29e25deac\": not found" Feb 12 22:00:55.763750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec-rootfs.mount: Deactivated successfully. Feb 12 22:00:55.764017 systemd[1]: var-lib-kubelet-pods-41bca072\x2d1a8a\x2d48fa\x2d96f4\x2da1f3db33f2e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjvnb.mount: Deactivated successfully. 
Feb 12 22:00:55.764110 systemd[1]: var-lib-kubelet-pods-41bca072\x2d1a8a\x2d48fa\x2d96f4\x2da1f3db33f2e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 22:00:55.764197 systemd[1]: var-lib-kubelet-pods-41bca072\x2d1a8a\x2d48fa\x2d96f4\x2da1f3db33f2e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 22:00:56.154906 kubelet[1981]: E0212 22:00:56.154852 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:57.071348 kubelet[1981]: E0212 22:00:57.071302 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:57.155763 kubelet[1981]: E0212 22:00:57.155711 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:57.245908 kubelet[1981]: E0212 22:00:57.245870 1981 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 22:00:57.289017 kubelet[1981]: I0212 22:00:57.288920 1981 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=41bca072-1a8a-48fa-96f4-a1f3db33f2e9 path="/var/lib/kubelet/pods/41bca072-1a8a-48fa-96f4-a1f3db33f2e9/volumes" Feb 12 22:00:57.850440 kubelet[1981]: I0212 22:00:57.850361 1981 topology_manager.go:212] "Topology Admit Handler" Feb 12 22:00:57.851130 kubelet[1981]: E0212 22:00:57.851103 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41bca072-1a8a-48fa-96f4-a1f3db33f2e9" containerName="mount-cgroup" Feb 12 22:00:57.851412 kubelet[1981]: E0212 22:00:57.851398 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41bca072-1a8a-48fa-96f4-a1f3db33f2e9" containerName="apply-sysctl-overwrites" Feb 12 22:00:57.851629 kubelet[1981]: E0212 22:00:57.851590 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41bca072-1a8a-48fa-96f4-a1f3db33f2e9" containerName="clean-cilium-state" Feb 12 22:00:57.851736 kubelet[1981]: E0212 22:00:57.851726 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41bca072-1a8a-48fa-96f4-a1f3db33f2e9" containerName="cilium-agent" Feb 12 22:00:57.852087 kubelet[1981]: E0212 22:00:57.852072 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="41bca072-1a8a-48fa-96f4-a1f3db33f2e9" containerName="mount-bpf-fs" Feb 12 22:00:57.852248 kubelet[1981]: I0212 22:00:57.852237 1981 memory_manager.go:346] "RemoveStaleState removing state" podUID="41bca072-1a8a-48fa-96f4-a1f3db33f2e9" containerName="cilium-agent" Feb 12 22:00:57.853255 kubelet[1981]: I0212 22:00:57.853235 1981 topology_manager.go:212] "Topology Admit Handler" Feb 12 22:00:57.860989 systemd[1]: Created slice kubepods-burstable-pod2358a2d5_f612_4619_8d07_e26ae14402af.slice. Feb 12 22:00:57.872749 systemd[1]: Created slice kubepods-besteffort-pod49919738_7e91_48ac_be91_9221954ae9da.slice. 
Feb 12 22:00:57.927567 kubelet[1981]: I0212 22:00:57.926936 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2358a2d5-f612-4619-8d07-e26ae14402af-clustermesh-secrets\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.927907 kubelet[1981]: I0212 22:00:57.927646 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-etc-cni-netd\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.927907 kubelet[1981]: I0212 22:00:57.927846 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-lib-modules\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.927907 kubelet[1981]: I0212 22:00:57.927898 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-xtables-lock\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928075 kubelet[1981]: I0212 22:00:57.927948 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-run\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928075 kubelet[1981]: I0212 22:00:57.927985 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-cgroup\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928075 kubelet[1981]: I0212 22:00:57.928017 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-host-proc-sys-net\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928075 kubelet[1981]: I0212 22:00:57.928064 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2358a2d5-f612-4619-8d07-e26ae14402af-hubble-tls\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928248 kubelet[1981]: I0212 22:00:57.928116 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cni-path\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928248 kubelet[1981]: I0212 22:00:57.928155 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-ipsec-secrets\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928248 kubelet[1981]: I0212 22:00:57.928186 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-host-proc-sys-kernel\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928248 kubelet[1981]: I0212 22:00:57.928232 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hj5m\" (UniqueName: \"kubernetes.io/projected/2358a2d5-f612-4619-8d07-e26ae14402af-kube-api-access-9hj5m\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928415 kubelet[1981]: I0212 22:00:57.928267 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49919738-7e91-48ac-be91-9221954ae9da-cilium-config-path\") pod \"cilium-operator-574c4bb98d-js7zm\" (UID: \"49919738-7e91-48ac-be91-9221954ae9da\") " pod="kube-system/cilium-operator-574c4bb98d-js7zm" Feb 12 22:00:57.928415 kubelet[1981]: I0212 22:00:57.928323 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-bpf-maps\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928415 kubelet[1981]: I0212 22:00:57.928365 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-hostproc\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928415 kubelet[1981]: I0212 22:00:57.928397 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-config-path\") pod \"cilium-6xzgm\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " pod="kube-system/cilium-6xzgm" Feb 12 22:00:57.928621 kubelet[1981]: I0212 22:00:57.928454 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2tjn\" (UniqueName: \"kubernetes.io/projected/49919738-7e91-48ac-be91-9221954ae9da-kube-api-access-k2tjn\") pod \"cilium-operator-574c4bb98d-js7zm\" (UID: \"49919738-7e91-48ac-be91-9221954ae9da\") " pod="kube-system/cilium-operator-574c4bb98d-js7zm" Feb 12 22:00:58.157957 kubelet[1981]: E0212 22:00:58.157905 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:58.172551 env[1564]: time="2024-02-12T22:00:58.172368909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xzgm,Uid:2358a2d5-f612-4619-8d07-e26ae14402af,Namespace:kube-system,Attempt:0,}" Feb 12 22:00:58.178143 env[1564]: time="2024-02-12T22:00:58.177157819Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-574c4bb98d-js7zm,Uid:49919738-7e91-48ac-be91-9221954ae9da,Namespace:kube-system,Attempt:0,}" Feb 12 22:00:58.210704 env[1564]: time="2024-02-12T22:00:58.209512554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 22:00:58.210704 env[1564]: time="2024-02-12T22:00:58.209558569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 22:00:58.210704 env[1564]: time="2024-02-12T22:00:58.209574342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 22:00:58.210704 env[1564]: time="2024-02-12T22:00:58.209725183Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62 pid=3713 runtime=io.containerd.runc.v2 Feb 12 22:00:58.227560 env[1564]: time="2024-02-12T22:00:58.227362386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 22:00:58.227836 env[1564]: time="2024-02-12T22:00:58.227787680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 22:00:58.228102 env[1564]: time="2024-02-12T22:00:58.228071960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 22:00:58.228874 env[1564]: time="2024-02-12T22:00:58.228673494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9910e288c2c5ff8b4eb4dd6a4417ea91cfc5cb35784668c44ac43b9666baf2a pid=3736 runtime=io.containerd.runc.v2 Feb 12 22:00:58.237264 systemd[1]: Started cri-containerd-10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62.scope. Feb 12 22:00:58.270208 systemd[1]: Started cri-containerd-f9910e288c2c5ff8b4eb4dd6a4417ea91cfc5cb35784668c44ac43b9666baf2a.scope. 
Feb 12 22:00:58.307419 env[1564]: time="2024-02-12T22:00:58.307368046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xzgm,Uid:2358a2d5-f612-4619-8d07-e26ae14402af,Namespace:kube-system,Attempt:0,} returns sandbox id \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\"" Feb 12 22:00:58.314126 env[1564]: time="2024-02-12T22:00:58.314067437Z" level=info msg="CreateContainer within sandbox \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 22:00:58.346781 env[1564]: time="2024-02-12T22:00:58.346717174Z" level=info msg="CreateContainer within sandbox \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9\"" Feb 12 22:00:58.347772 env[1564]: time="2024-02-12T22:00:58.347740674Z" level=info msg="StartContainer for \"17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9\"" Feb 12 22:00:58.363472 env[1564]: time="2024-02-12T22:00:58.363198664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-js7zm,Uid:49919738-7e91-48ac-be91-9221954ae9da,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9910e288c2c5ff8b4eb4dd6a4417ea91cfc5cb35784668c44ac43b9666baf2a\"" Feb 12 22:00:58.366202 env[1564]: time="2024-02-12T22:00:58.366146760Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 22:00:58.385402 systemd[1]: Started cri-containerd-17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9.scope. Feb 12 22:00:58.409671 systemd[1]: cri-containerd-17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9.scope: Deactivated successfully. 
Feb 12 22:00:58.437548 env[1564]: time="2024-02-12T22:00:58.437490151Z" level=info msg="shim disconnected" id=17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9 Feb 12 22:00:58.437548 env[1564]: time="2024-02-12T22:00:58.437550389Z" level=warning msg="cleaning up after shim disconnected" id=17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9 namespace=k8s.io Feb 12 22:00:58.437879 env[1564]: time="2024-02-12T22:00:58.437562579Z" level=info msg="cleaning up dead shim" Feb 12 22:00:58.448084 env[1564]: time="2024-02-12T22:00:58.448041387Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3817 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T22:00:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 22:00:58.448566 env[1564]: time="2024-02-12T22:00:58.448453486Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Feb 12 22:00:58.448907 env[1564]: time="2024-02-12T22:00:58.448852426Z" level=error msg="Failed to pipe stderr of container \"17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9\"" error="reading from a closed fifo" Feb 12 22:00:58.449151 env[1564]: time="2024-02-12T22:00:58.448860944Z" level=error msg="Failed to pipe stdout of container \"17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9\"" error="reading from a closed fifo" Feb 12 22:00:58.451054 env[1564]: time="2024-02-12T22:00:58.450989284Z" level=error msg="StartContainer for \"17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 22:00:58.451419 kubelet[1981]: E0212 22:00:58.451393 1981 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9" Feb 12 22:00:58.451582 kubelet[1981]: E0212 22:00:58.451562 1981 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 22:00:58.451582 kubelet[1981]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 22:00:58.451582 kubelet[1981]: rm /hostbin/cilium-mount Feb 12 22:00:58.452056 kubelet[1981]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9hj5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-6xzgm_kube-system(2358a2d5-f612-4619-8d07-e26ae14402af): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 22:00:58.452056 kubelet[1981]: E0212 22:00:58.451693 1981 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6xzgm" podUID=2358a2d5-f612-4619-8d07-e26ae14402af Feb 12 22:00:58.572581 env[1564]: time="2024-02-12T22:00:58.572539128Z" level=info msg="StopPodSandbox for \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\"" Feb 12 22:00:58.572767 env[1564]: time="2024-02-12T22:00:58.572604792Z" level=info msg="Container to stop \"17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 22:00:58.580732 systemd[1]: cri-containerd-10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62.scope: Deactivated successfully. 
Feb 12 22:00:58.620152 env[1564]: time="2024-02-12T22:00:58.620106715Z" level=info msg="shim disconnected" id=10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62 Feb 12 22:00:58.620442 env[1564]: time="2024-02-12T22:00:58.620403037Z" level=warning msg="cleaning up after shim disconnected" id=10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62 namespace=k8s.io Feb 12 22:00:58.620526 env[1564]: time="2024-02-12T22:00:58.620424011Z" level=info msg="cleaning up dead shim" Feb 12 22:00:58.631340 env[1564]: time="2024-02-12T22:00:58.631181819Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:00:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3847 runtime=io.containerd.runc.v2\n" Feb 12 22:00:58.631887 env[1564]: time="2024-02-12T22:00:58.631849809Z" level=info msg="TearDown network for sandbox \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\" successfully" Feb 12 22:00:58.631887 env[1564]: time="2024-02-12T22:00:58.631882831Z" level=info msg="StopPodSandbox for \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\" returns successfully" Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734021 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734065 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-etc-cni-netd\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734110 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-host-proc-sys-net\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734147 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cni-path\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734172 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hj5m\" (UniqueName: \"kubernetes.io/projected/2358a2d5-f612-4619-8d07-e26ae14402af-kube-api-access-9hj5m\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734192 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-bpf-maps\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734218 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-hostproc\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734241 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-ipsec-secrets\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734259 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2358a2d5-f612-4619-8d07-e26ae14402af-clustermesh-secrets\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734274 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-cgroup\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734302 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-run\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734319 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-host-proc-sys-kernel\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734336 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-xtables-lock\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734406 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2358a2d5-f612-4619-8d07-e26ae14402af-hubble-tls\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734446 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-config-path\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.736913 kubelet[1981]: I0212 22:00:58.734468 1981 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-lib-modules\") pod \"2358a2d5-f612-4619-8d07-e26ae14402af\" (UID: \"2358a2d5-f612-4619-8d07-e26ae14402af\") " Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.734498 1981 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-etc-cni-netd\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.734525 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.734549 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.734564 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cni-path" (OuterVolumeSpecName: "cni-path") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.734891 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.735051 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.735073 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-hostproc" (OuterVolumeSpecName: "hostproc") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.735284 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.735313 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: I0212 22:00:58.735360 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 22:00:58.738470 kubelet[1981]: W0212 22:00:58.736079 1981 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2358a2d5-f612-4619-8d07-e26ae14402af/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 22:00:58.740785 kubelet[1981]: I0212 22:00:58.740739 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 22:00:58.743473 kubelet[1981]: I0212 22:00:58.743444 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2358a2d5-f612-4619-8d07-e26ae14402af-kube-api-access-9hj5m" (OuterVolumeSpecName: "kube-api-access-9hj5m") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "kube-api-access-9hj5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 22:00:58.746234 kubelet[1981]: I0212 22:00:58.746203 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2358a2d5-f612-4619-8d07-e26ae14402af-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 22:00:58.746841 kubelet[1981]: I0212 22:00:58.746814 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2358a2d5-f612-4619-8d07-e26ae14402af-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 22:00:58.748794 kubelet[1981]: I0212 22:00:58.748756 1981 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2358a2d5-f612-4619-8d07-e26ae14402af" (UID: "2358a2d5-f612-4619-8d07-e26ae14402af"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 22:00:58.835762 kubelet[1981]: I0212 22:00:58.835720 1981 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9hj5m\" (UniqueName: \"kubernetes.io/projected/2358a2d5-f612-4619-8d07-e26ae14402af-kube-api-access-9hj5m\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.835762 kubelet[1981]: I0212 22:00:58.835759 1981 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-bpf-maps\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.835762 kubelet[1981]: I0212 22:00:58.835773 1981 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-hostproc\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835788 1981 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-host-proc-sys-net\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835801 1981 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cni-path\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835814 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-run\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835826 1981 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-host-proc-sys-kernel\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835841 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-ipsec-secrets\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835855 1981 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2358a2d5-f612-4619-8d07-e26ae14402af-clustermesh-secrets\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835868 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-cgroup\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835880 1981 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2358a2d5-f612-4619-8d07-e26ae14402af-cilium-config-path\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835893 1981 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-xtables-lock\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835908 1981 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/2358a2d5-f612-4619-8d07-e26ae14402af-hubble-tls\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:58.836036 kubelet[1981]: I0212 22:00:58.835920 1981 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2358a2d5-f612-4619-8d07-e26ae14402af-lib-modules\") on node \"172.31.16.81\" DevicePath \"\"" Feb 12 22:00:59.052242 systemd[1]: var-lib-kubelet-pods-2358a2d5\x2df612\x2d4619\x2d8d07\x2de26ae14402af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9hj5m.mount: Deactivated successfully. Feb 12 22:00:59.052355 systemd[1]: var-lib-kubelet-pods-2358a2d5\x2df612\x2d4619\x2d8d07\x2de26ae14402af-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 22:00:59.052459 systemd[1]: var-lib-kubelet-pods-2358a2d5\x2df612\x2d4619\x2d8d07\x2de26ae14402af-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 22:00:59.052548 systemd[1]: var-lib-kubelet-pods-2358a2d5\x2df612\x2d4619\x2d8d07\x2de26ae14402af-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 22:00:59.158955 kubelet[1981]: E0212 22:00:59.158906 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:00:59.199518 amazon-ssm-agent[1539]: 2024-02-12 22:00:59 INFO [HealthCheck] HealthCheck reporting agent health. Feb 12 22:00:59.292508 systemd[1]: Removed slice kubepods-burstable-pod2358a2d5_f612_4619_8d07_e26ae14402af.slice. Feb 12 22:00:59.587734 kubelet[1981]: I0212 22:00:59.587703 1981 scope.go:115] "RemoveContainer" containerID="17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9" Feb 12 22:00:59.594950 env[1564]: time="2024-02-12T22:00:59.594832999Z" level=info msg="RemoveContainer for \"17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9\"" Feb 12 22:00:59.603151 env[1564]: time="2024-02-12T22:00:59.603102329Z" level=info msg="RemoveContainer for \"17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9\" returns successfully" Feb 12 22:00:59.653349 kubelet[1981]: I0212 22:00:59.652592 1981 topology_manager.go:212] "Topology Admit Handler" Feb 12 22:00:59.653349 kubelet[1981]: E0212 22:00:59.652735 1981 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2358a2d5-f612-4619-8d07-e26ae14402af" containerName="mount-cgroup" Feb 12 22:00:59.653349 kubelet[1981]: I0212 22:00:59.652792 1981 memory_manager.go:346] "RemoveStaleState removing state" podUID="2358a2d5-f612-4619-8d07-e26ae14402af" containerName="mount-cgroup" Feb 12 22:00:59.667383 systemd[1]: Created slice kubepods-burstable-podb339fa49_0540_47f3_8e48_4f7c58d95682.slice. 
Feb 12 22:00:59.742501 kubelet[1981]: I0212 22:00:59.742443 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wlr6\" (UniqueName: \"kubernetes.io/projected/b339fa49-0540-47f3-8e48-4f7c58d95682-kube-api-access-9wlr6\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.742501 kubelet[1981]: I0212 22:00:59.742546 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-cilium-cgroup\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.742768 kubelet[1981]: I0212 22:00:59.742579 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b339fa49-0540-47f3-8e48-4f7c58d95682-cilium-ipsec-secrets\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.742768 kubelet[1981]: I0212 22:00:59.742623 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-etc-cni-netd\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.742768 kubelet[1981]: I0212 22:00:59.742650 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-xtables-lock\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.742768 kubelet[1981]: I0212 22:00:59.742704 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b339fa49-0540-47f3-8e48-4f7c58d95682-clustermesh-secrets\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.742768 kubelet[1981]: I0212 22:00:59.742769 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-host-proc-sys-net\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.743245 kubelet[1981]: I0212 22:00:59.742799 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-bpf-maps\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.743245 kubelet[1981]: I0212 22:00:59.742846 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-cni-path\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.743245 kubelet[1981]: I0212 22:00:59.742878 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/b339fa49-0540-47f3-8e48-4f7c58d95682-hubble-tls\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.743245 kubelet[1981]: I0212 22:00:59.742947 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-host-proc-sys-kernel\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.743245 kubelet[1981]: I0212 22:00:59.742999 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-lib-modules\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.743245 kubelet[1981]: I0212 22:00:59.743127 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b339fa49-0540-47f3-8e48-4f7c58d95682-cilium-config-path\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.743245 kubelet[1981]: I0212 22:00:59.743185 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-cilium-run\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.743245 kubelet[1981]: I0212 22:00:59.743235 1981 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b339fa49-0540-47f3-8e48-4f7c58d95682-hostproc\") pod \"cilium-8jm5x\" (UID: \"b339fa49-0540-47f3-8e48-4f7c58d95682\") " pod="kube-system/cilium-8jm5x" Feb 12 22:00:59.778127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290842602.mount: Deactivated successfully. Feb 12 22:00:59.979919 env[1564]: time="2024-02-12T22:00:59.979868920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jm5x,Uid:b339fa49-0540-47f3-8e48-4f7c58d95682,Namespace:kube-system,Attempt:0,}" Feb 12 22:01:00.068111 env[1564]: time="2024-02-12T22:01:00.068008315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 22:01:00.068783 env[1564]: time="2024-02-12T22:01:00.068084991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 22:01:00.068783 env[1564]: time="2024-02-12T22:01:00.068101976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 22:01:00.068783 env[1564]: time="2024-02-12T22:01:00.068404141Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa pid=3876 runtime=io.containerd.runc.v2 Feb 12 22:01:00.114012 systemd[1]: Started cri-containerd-771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa.scope. 
Feb 12 22:01:00.127164 systemd[1]: run-containerd-runc-k8s.io-771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa-runc.YAD0ma.mount: Deactivated successfully. Feb 12 22:01:00.159770 kubelet[1981]: E0212 22:01:00.159662 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:00.171942 env[1564]: time="2024-02-12T22:01:00.171886682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jm5x,Uid:b339fa49-0540-47f3-8e48-4f7c58d95682,Namespace:kube-system,Attempt:0,} returns sandbox id \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\"" Feb 12 22:01:00.175161 env[1564]: time="2024-02-12T22:01:00.175112062Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 22:01:00.209981 env[1564]: time="2024-02-12T22:01:00.209920664Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e\"" Feb 12 22:01:00.211315 env[1564]: time="2024-02-12T22:01:00.211196974Z" level=info msg="StartContainer for \"51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e\"" Feb 12 22:01:00.258396 systemd[1]: Started cri-containerd-51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e.scope. Feb 12 22:01:00.315697 env[1564]: time="2024-02-12T22:01:00.315638731Z" level=info msg="StartContainer for \"51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e\" returns successfully" Feb 12 22:01:00.344678 systemd[1]: cri-containerd-51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e.scope: Deactivated successfully. 
Feb 12 22:01:00.451788 env[1564]: time="2024-02-12T22:01:00.451664801Z" level=info msg="shim disconnected" id=51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e Feb 12 22:01:00.452200 env[1564]: time="2024-02-12T22:01:00.452173641Z" level=warning msg="cleaning up after shim disconnected" id=51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e namespace=k8s.io Feb 12 22:01:00.452323 env[1564]: time="2024-02-12T22:01:00.452305591Z" level=info msg="cleaning up dead shim" Feb 12 22:01:00.468005 env[1564]: time="2024-02-12T22:01:00.467951340Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3963 runtime=io.containerd.runc.v2\n" Feb 12 22:01:00.583509 kubelet[1981]: I0212 22:01:00.581195 1981 setters.go:548] "Node became not ready" node="172.31.16.81" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 22:01:00.581121304 +0000 UTC m=+83.882379578 LastTransitionTime:2024-02-12 22:01:00.581121304 +0000 UTC m=+83.882379578 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 22:01:00.596812 env[1564]: time="2024-02-12T22:01:00.596739229Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 22:01:00.631202 env[1564]: time="2024-02-12T22:01:00.631133820Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd\"" Feb 12 22:01:00.632238 env[1564]: time="2024-02-12T22:01:00.632200819Z" level=info msg="StartContainer for \"eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd\"" Feb 12 22:01:00.692524 systemd[1]: Started cri-containerd-eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd.scope. Feb 12 22:01:00.748934 env[1564]: time="2024-02-12T22:01:00.748878873Z" level=info msg="StartContainer for \"eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd\" returns successfully" Feb 12 22:01:00.765032 systemd[1]: cri-containerd-eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd.scope: Deactivated successfully. Feb 12 22:01:00.864551 env[1564]: time="2024-02-12T22:01:00.863873220Z" level=info msg="shim disconnected" id=eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd Feb 12 22:01:00.864880 env[1564]: time="2024-02-12T22:01:00.864853609Z" level=warning msg="cleaning up after shim disconnected" id=eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd namespace=k8s.io Feb 12 22:01:00.865008 env[1564]: time="2024-02-12T22:01:00.864991761Z" level=info msg="cleaning up dead shim" Feb 12 22:01:00.887625 env[1564]: time="2024-02-12T22:01:00.887578283Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4026 runtime=io.containerd.runc.v2\n" Feb 12 22:01:01.049887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118961038.mount: Deactivated successfully. 
Feb 12 22:01:01.159992 kubelet[1981]: E0212 22:01:01.159905 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:01.180405 env[1564]: time="2024-02-12T22:01:01.180347101Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:01:01.183533 env[1564]: time="2024-02-12T22:01:01.183486530Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:01:01.186583 env[1564]: time="2024-02-12T22:01:01.186534559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 22:01:01.187288 env[1564]: time="2024-02-12T22:01:01.187171365Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 22:01:01.190010 env[1564]: time="2024-02-12T22:01:01.189963485Z" level=info msg="CreateContainer within sandbox \"f9910e288c2c5ff8b4eb4dd6a4417ea91cfc5cb35784668c44ac43b9666baf2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 22:01:01.218619 env[1564]: time="2024-02-12T22:01:01.218526795Z" level=info msg="CreateContainer within sandbox \"f9910e288c2c5ff8b4eb4dd6a4417ea91cfc5cb35784668c44ac43b9666baf2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a5ed006439a3864c0b5820edbc2fba45f1e0719d7309087f2acb9fc229d9e530\"" Feb 12 22:01:01.219778 env[1564]: time="2024-02-12T22:01:01.219664307Z" level=info msg="StartContainer for \"a5ed006439a3864c0b5820edbc2fba45f1e0719d7309087f2acb9fc229d9e530\"" Feb 12 22:01:01.263049 systemd[1]: Started cri-containerd-a5ed006439a3864c0b5820edbc2fba45f1e0719d7309087f2acb9fc229d9e530.scope. 
Feb 12 22:01:01.294158 kubelet[1981]: I0212 22:01:01.294090 1981 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2358a2d5-f612-4619-8d07-e26ae14402af path="/var/lib/kubelet/pods/2358a2d5-f612-4619-8d07-e26ae14402af/volumes" Feb 12 22:01:01.315083 env[1564]: time="2024-02-12T22:01:01.314917571Z" level=info msg="StartContainer for \"a5ed006439a3864c0b5820edbc2fba45f1e0719d7309087f2acb9fc229d9e530\" returns successfully" Feb 12 22:01:01.546185 kubelet[1981]: W0212 22:01:01.546043 1981 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2358a2d5_f612_4619_8d07_e26ae14402af.slice/cri-containerd-17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9.scope WatchSource:0}: container "17588772cf5a89e9d579d763f8e20f4263e2edfd4921b326d7e66f4dc363a3e9" in namespace "k8s.io": not found Feb 12 22:01:01.617328 env[1564]: time="2024-02-12T22:01:01.617268208Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 22:01:01.662234 kubelet[1981]: I0212 22:01:01.662176 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-js7zm" podStartSLOduration=1.839582466 podCreationTimestamp="2024-02-12 22:00:57 +0000 UTC" firstStartedPulling="2024-02-12 22:00:58.36517102 +0000 UTC m=+81.666429302" lastFinishedPulling="2024-02-12 22:01:01.187713692 +0000 UTC m=+84.488971979" observedRunningTime="2024-02-12 22:01:01.661785753 +0000 UTC m=+84.963044045" watchObservedRunningTime="2024-02-12 22:01:01.662125143 +0000 UTC m=+84.963383435" Feb 12 22:01:01.691203 env[1564]: time="2024-02-12T22:01:01.691138722Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e\"" Feb 12 22:01:01.692391 env[1564]: time="2024-02-12T22:01:01.692349872Z" level=info msg="StartContainer for \"262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e\"" Feb 12 22:01:01.766010 systemd[1]: Started cri-containerd-262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e.scope. Feb 12 22:01:02.035230 env[1564]: time="2024-02-12T22:01:02.035174030Z" level=info msg="StartContainer for \"262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e\" returns successfully" Feb 12 22:01:02.083835 systemd[1]: cri-containerd-262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e.scope: Deactivated successfully. Feb 12 22:01:02.168294 kubelet[1981]: E0212 22:01:02.168230 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:02.195360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e-rootfs.mount: Deactivated successfully. 
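The startup-latency entry for cilium-operator above is internally consistent: subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) from the span between podCreationTimestamp and the watch-observed running time reproduces the reported podStartSLOduration of roughly 1.84s. A small standalone check of that arithmetic using the timestamps from the log line (illustrative only, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry for
	// kube-system/cilium-operator-574c4bb98d-js7zm above.
	created := time.Date(2024, 2, 12, 22, 0, 57, 0, time.UTC)
	firstPull := time.Date(2024, 2, 12, 22, 0, 58, 365171020, time.UTC)
	lastPull := time.Date(2024, 2, 12, 22, 1, 1, 187713692, time.UTC)
	observed := time.Date(2024, 2, 12, 22, 1, 1, 662125143, time.UTC)

	pullWindow := lastPull.Sub(firstPull)
	startDuration := observed.Sub(created) - pullWindow

	fmt.Println("image pull window:  ", pullWindow)    // ~2.82s
	fmt.Println("podStartSLOduration:", startDuration) // ~1.8396s, matching the log within its print precision
}
```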
Feb 12 22:01:02.223833 env[1564]: time="2024-02-12T22:01:02.223776508Z" level=info msg="shim disconnected" id=262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e Feb 12 22:01:02.226614 env[1564]: time="2024-02-12T22:01:02.226562198Z" level=warning msg="cleaning up after shim disconnected" id=262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e namespace=k8s.io Feb 12 22:01:02.226815 env[1564]: time="2024-02-12T22:01:02.226797464Z" level=info msg="cleaning up dead shim" Feb 12 22:01:02.247462 kubelet[1981]: E0212 22:01:02.247374 1981 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 22:01:02.255856 env[1564]: time="2024-02-12T22:01:02.255803635Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4121 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T22:01:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 12 22:01:02.627269 env[1564]: time="2024-02-12T22:01:02.627217037Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 22:01:02.657720 env[1564]: time="2024-02-12T22:01:02.657657171Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137\"" Feb 12 22:01:02.659528 env[1564]: time="2024-02-12T22:01:02.659487306Z" level=info msg="StartContainer for \"efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137\"" Feb 12 22:01:02.758093 systemd[1]: Started cri-containerd-efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137.scope. Feb 12 22:01:02.879120 env[1564]: time="2024-02-12T22:01:02.877327566Z" level=info msg="StartContainer for \"efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137\" returns successfully" Feb 12 22:01:02.895993 systemd[1]: cri-containerd-efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137.scope: Deactivated successfully. Feb 12 22:01:02.963872 env[1564]: time="2024-02-12T22:01:02.963817694Z" level=info msg="shim disconnected" id=efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137 Feb 12 22:01:02.964129 env[1564]: time="2024-02-12T22:01:02.963975580Z" level=warning msg="cleaning up after shim disconnected" id=efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137 namespace=k8s.io Feb 12 22:01:02.964129 env[1564]: time="2024-02-12T22:01:02.963998646Z" level=info msg="cleaning up dead shim" Feb 12 22:01:02.993860 env[1564]: time="2024-02-12T22:01:02.993808588Z" level=warning msg="cleanup warnings time=\"2024-02-12T22:01:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4178 runtime=io.containerd.runc.v2\n" Feb 12 22:01:03.054784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137-rootfs.mount: Deactivated successfully. 
Feb 12 22:01:03.168476 kubelet[1981]: E0212 22:01:03.168394 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:03.629645 env[1564]: time="2024-02-12T22:01:03.629295248Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 22:01:03.656202 env[1564]: time="2024-02-12T22:01:03.656146135Z" level=info msg="CreateContainer within sandbox \"771ad9d77e90c84bba95916c459ba4813390d8fa5fc3b9fa722ff651152b19fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"995d677a470b6276e5a8a560121e6d5bd2eb68cfa732032ad59a977d2a6ecd52\"" Feb 12 22:01:03.656836 env[1564]: time="2024-02-12T22:01:03.656788791Z" level=info msg="StartContainer for \"995d677a470b6276e5a8a560121e6d5bd2eb68cfa732032ad59a977d2a6ecd52\"" Feb 12 22:01:03.688726 systemd[1]: Started cri-containerd-995d677a470b6276e5a8a560121e6d5bd2eb68cfa732032ad59a977d2a6ecd52.scope. Feb 12 22:01:03.738481 env[1564]: time="2024-02-12T22:01:03.737384363Z" level=info msg="StartContainer for \"995d677a470b6276e5a8a560121e6d5bd2eb68cfa732032ad59a977d2a6ecd52\" returns successfully" Feb 12 22:01:04.169748 kubelet[1981]: E0212 22:01:04.169682 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:04.479464 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 12 22:01:04.682590 kubelet[1981]: W0212 22:01:04.682546 1981 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb339fa49_0540_47f3_8e48_4f7c58d95682.slice/cri-containerd-51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e.scope WatchSource:0}: task 51622a53c6ef87d19b31b86baf578b1098cac66e9963cd9481f32171715ca04e not found: not found Feb 12 22:01:04.691131 kubelet[1981]: I0212 22:01:04.690208 1981 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8jm5x" podStartSLOduration=5.69014469 podCreationTimestamp="2024-02-12 22:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 22:01:04.689971307 +0000 UTC m=+87.991229600" watchObservedRunningTime="2024-02-12 22:01:04.69014469 +0000 UTC m=+87.991402984" Feb 12 22:01:05.171442 kubelet[1981]: E0212 22:01:05.171081 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:06.175773 kubelet[1981]: E0212 22:01:06.175733 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:07.177091 kubelet[1981]: E0212 22:01:07.177056 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:07.798830 kubelet[1981]: W0212 22:01:07.798712 1981 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb339fa49_0540_47f3_8e48_4f7c58d95682.slice/cri-containerd-eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd.scope WatchSource:0}: task eff91aef212e1111e9febb40982a0f1757fc81d3089b310390ed88fe3e5e77dd not found: not found Feb 12 22:01:07.831639 systemd-networkd[1375]: lxc_health: Link UP Feb 12 22:01:07.836266 
systemd-networkd[1375]: lxc_health: Gained carrier Feb 12 22:01:07.836481 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 22:01:07.856030 (udev-worker)[4758]: Network interface NamePolicy= disabled on kernel command line. Feb 12 22:01:08.178110 kubelet[1981]: E0212 22:01:08.178001 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:09.195666 kubelet[1981]: E0212 22:01:09.179082 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:09.196643 systemd-networkd[1375]: lxc_health: Gained IPv6LL Feb 12 22:01:09.684896 systemd[1]: run-containerd-runc-k8s.io-995d677a470b6276e5a8a560121e6d5bd2eb68cfa732032ad59a977d2a6ecd52-runc.aFfo3O.mount: Deactivated successfully. Feb 12 22:01:10.180051 kubelet[1981]: E0212 22:01:10.179978 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:10.921442 kubelet[1981]: W0212 22:01:10.921380 1981 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb339fa49_0540_47f3_8e48_4f7c58d95682.slice/cri-containerd-262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e.scope WatchSource:0}: task 262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e not found: not found Feb 12 22:01:11.180643 kubelet[1981]: E0212 22:01:11.180506 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:11.981020 systemd[1]: run-containerd-runc-k8s.io-995d677a470b6276e5a8a560121e6d5bd2eb68cfa732032ad59a977d2a6ecd52-runc.6ggHhF.mount: Deactivated successfully. Feb 12 22:01:12.180953 kubelet[1981]: E0212 22:01:12.180842 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:13.181760 kubelet[1981]: E0212 22:01:13.181720 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:14.037139 kubelet[1981]: W0212 22:01:14.037091 1981 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb339fa49_0540_47f3_8e48_4f7c58d95682.slice/cri-containerd-efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137.scope WatchSource:0}: task efbed9725814ec58e30b3f9dc7e8674c64fcf9a3c85ba540e8e8c2da1f129137 not found: not found Feb 12 22:01:14.183632 kubelet[1981]: E0212 22:01:14.183595 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:14.306671 systemd[1]: run-containerd-runc-k8s.io-995d677a470b6276e5a8a560121e6d5bd2eb68cfa732032ad59a977d2a6ecd52-runc.IK1iTS.mount: Deactivated successfully. 
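The "Failed to process watch event" warnings above name systemd cgroup scopes of the form kubepods-burstable-pod&lt;uid&gt;.slice/cri-containerd-&lt;id&gt;.scope; the tasks are already gone when the watch fires because the short-lived init containers have exited and been cleaned up. A small sketch of recovering the pod UID and container ID from such a path (an illustrative standalone helper, not kubelet or cAdvisor code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Matches cgroup paths like the ones in the watch-event warnings, e.g.
// .../kubepods-burstable-pod<uid>.slice/cri-containerd-<id>.scope
var scopeRe = regexp.MustCompile(`kubepods-burstable-pod([0-9a-f_]+)\.slice/cri-containerd-([0-9a-f]+)\.scope`)

func main() {
	// Path copied from one of the watch-event warnings above.
	path := "/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-podb339fa49_0540_47f3_8e48_4f7c58d95682.slice/" +
		"cri-containerd-262ab9a1b6b7576ab395eaa12fdeafec07468053c69341feb8a85abdb798829e.scope"

	m := scopeRe.FindStringSubmatch(path)
	if m == nil {
		fmt.Println("not a burstable pod container scope")
		return
	}
	// The slice name encodes the pod UID with '-' replaced by '_'.
	fmt.Println("pod UID:     ", strings.ReplaceAll(m[1], "_", "-"))
	fmt.Println("container ID:", m[2])
}
```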
Feb 12 22:01:15.185358 kubelet[1981]: E0212 22:01:15.185035 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:16.186516 kubelet[1981]: E0212 22:01:16.186471 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:17.070763 kubelet[1981]: E0212 22:01:17.070710 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:17.187463 kubelet[1981]: E0212 22:01:17.187394 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:18.187611 kubelet[1981]: E0212 22:01:18.187566 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:19.188286 kubelet[1981]: E0212 22:01:19.188242 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:20.188443 kubelet[1981]: E0212 22:01:20.188374 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:21.189058 kubelet[1981]: E0212 22:01:21.189005 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:22.189639 kubelet[1981]: E0212 22:01:22.189586 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:23.190467 kubelet[1981]: E0212 22:01:23.190414 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:24.191345 kubelet[1981]: E0212 22:01:24.191297 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:25.191680 kubelet[1981]: E0212 22:01:25.191634 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:26.192229 kubelet[1981]: E0212 22:01:26.192179 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:27.193209 kubelet[1981]: E0212 22:01:27.193154 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:28.194002 kubelet[1981]: E0212 22:01:28.193951 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:29.194813 kubelet[1981]: E0212 22:01:29.194769 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:30.195857 kubelet[1981]: E0212 22:01:30.195803 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:30.590930 kubelet[1981]: E0212 22:01:30.590655 1981 controller.go:193] "Failed to update lease" err="Put \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 22:01:31.196599 kubelet[1981]: E0212 22:01:31.196540 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 22:01:32.197626 kubelet[1981]: E0212 22:01:32.197573 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:33.198399 kubelet[1981]: E0212 22:01:33.198348 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:34.199161 kubelet[1981]: E0212 22:01:34.199112 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:35.199342 kubelet[1981]: E0212 22:01:35.199296 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:36.199547 kubelet[1981]: E0212 22:01:36.199494 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:37.070858 kubelet[1981]: E0212 22:01:37.070800 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:37.096063 env[1564]: time="2024-02-12T22:01:37.095639396Z" level=info msg="StopPodSandbox for \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\"" Feb 12 22:01:37.096063 env[1564]: time="2024-02-12T22:01:37.095793451Z" level=info msg="TearDown network for sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" successfully" Feb 12 22:01:37.096063 env[1564]: time="2024-02-12T22:01:37.095837272Z" level=info msg="StopPodSandbox for \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" returns successfully" Feb 12 22:01:37.097135 env[1564]: time="2024-02-12T22:01:37.097101510Z" level=info msg="RemovePodSandbox for \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\"" Feb 12 22:01:37.097253 env[1564]: time="2024-02-12T22:01:37.097136103Z" level=info msg="Forcibly stopping sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\"" Feb 12 22:01:37.097253 env[1564]: time="2024-02-12T22:01:37.097225774Z" level=info msg="TearDown network for sandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" successfully" Feb 12 22:01:37.103667 env[1564]: time="2024-02-12T22:01:37.103624690Z" level=info msg="RemovePodSandbox \"b49bc04e794f3aa11208b1ec5881830464ff7b090f20f252fe997f13f5c80eec\" returns successfully" Feb 12 22:01:37.104140 env[1564]: time="2024-02-12T22:01:37.104105323Z" level=info msg="StopPodSandbox for \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\"" Feb 12 22:01:37.104240 env[1564]: time="2024-02-12T22:01:37.104191120Z" level=info msg="TearDown network for sandbox \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\" successfully" Feb 12 22:01:37.104287 env[1564]: time="2024-02-12T22:01:37.104233932Z" level=info msg="StopPodSandbox for \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\" returns successfully" Feb 12 22:01:37.104731 env[1564]: time="2024-02-12T22:01:37.104699645Z" level=info msg="RemovePodSandbox for \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\"" Feb 12 22:01:37.104823 env[1564]: time="2024-02-12T22:01:37.104734032Z" level=info msg="Forcibly stopping sandbox \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\"" Feb 12 22:01:37.104877 env[1564]: time="2024-02-12T22:01:37.104822584Z" level=info msg="TearDown network for sandbox 
\"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\" successfully" Feb 12 22:01:37.108697 env[1564]: time="2024-02-12T22:01:37.108662471Z" level=info msg="RemovePodSandbox \"10f4427e6f09c0d6a1a21a8b505b3324104231cbcc93e0c7ee05dfffa53cea62\" returns successfully" Feb 12 22:01:37.200290 kubelet[1981]: E0212 22:01:37.200240 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:38.200767 kubelet[1981]: E0212 22:01:38.200713 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:39.201924 kubelet[1981]: E0212 22:01:39.201874 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:40.202928 kubelet[1981]: E0212 22:01:40.202878 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:40.591546 kubelet[1981]: E0212 22:01:40.591303 1981 controller.go:193] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.16.81)" Feb 12 22:01:41.203828 kubelet[1981]: E0212 22:01:41.203778 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:42.205082 kubelet[1981]: E0212 22:01:42.205033 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:43.206156 kubelet[1981]: E0212 22:01:43.206104 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:44.206893 kubelet[1981]: E0212 22:01:44.206846 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:45.207492 kubelet[1981]: E0212 22:01:45.207444 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:45.511606 kubelet[1981]: E0212 22:01:45.509485 1981 controller.go:193] "Failed to update lease" err="Put \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": unexpected EOF" Feb 12 22:01:45.522367 kubelet[1981]: E0212 22:01:45.522077 1981 controller.go:193] "Failed to update lease" err="Put \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": read tcp 172.31.16.81:33322->172.31.21.40:6443: read: connection reset by peer" Feb 12 22:01:45.522993 kubelet[1981]: E0212 22:01:45.522971 1981 controller.go:193] "Failed to update lease" err="Put \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": dial tcp 172.31.21.40:6443: connect: connection refused" Feb 12 22:01:45.523118 kubelet[1981]: I0212 22:01:45.522999 1981 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 12 22:01:45.523609 kubelet[1981]: E0212 22:01:45.523578 1981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": dial tcp 172.31.21.40:6443: connect: connection refused" 
interval="200ms" Feb 12 22:01:45.725134 kubelet[1981]: E0212 22:01:45.725097 1981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": dial tcp 172.31.21.40:6443: connect: connection refused" interval="400ms" Feb 12 22:01:46.126188 kubelet[1981]: E0212 22:01:46.126152 1981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": dial tcp 172.31.21.40:6443: connect: connection refused" interval="800ms" Feb 12 22:01:46.208154 kubelet[1981]: E0212 22:01:46.208115 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:47.208740 kubelet[1981]: E0212 22:01:47.208539 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:48.209085 kubelet[1981]: E0212 22:01:48.209008 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:49.209975 kubelet[1981]: E0212 22:01:49.209927 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:50.211177 kubelet[1981]: E0212 22:01:50.211075 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:51.212296 kubelet[1981]: E0212 22:01:51.212130 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:52.213092 kubelet[1981]: E0212 22:01:52.213041 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:53.213745 kubelet[1981]: E0212 22:01:53.213696 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:54.214473 kubelet[1981]: E0212 22:01:54.214414 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:55.215019 kubelet[1981]: E0212 22:01:55.214968 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:56.215460 kubelet[1981]: E0212 22:01:56.215409 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:56.928019 kubelet[1981]: E0212 22:01:56.927974 1981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 12 22:01:57.071191 kubelet[1981]: E0212 22:01:57.071140 1981 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:57.216327 kubelet[1981]: E0212 22:01:57.216198 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:01:58.216981 kubelet[1981]: E0212 22:01:58.216928 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 22:01:59.219576 kubelet[1981]: E0212 22:01:59.219524 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:00.219937 kubelet[1981]: E0212 22:02:00.219882 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:01.221040 kubelet[1981]: E0212 22:02:01.220982 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:01.588920 kubelet[1981]: E0212 22:02:01.587904 1981 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.16.81\": Get \"https://172.31.21.40:6443/api/v1/nodes/172.31.16.81?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 12 22:02:02.222183 kubelet[1981]: E0212 22:02:02.221999 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:03.223021 kubelet[1981]: E0212 22:02:03.222965 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:04.224077 kubelet[1981]: E0212 22:02:04.224022 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:05.224629 kubelet[1981]: E0212 22:02:05.224576 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:06.225171 kubelet[1981]: E0212 22:02:06.225125 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:07.225812 kubelet[1981]: E0212 22:02:07.225761 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:08.226577 kubelet[1981]: E0212 22:02:08.226506 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:08.530159 kubelet[1981]: E0212 22:02:08.529961 1981 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.81?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 12 22:02:09.226869 kubelet[1981]: E0212 22:02:09.226815 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:10.227628 kubelet[1981]: E0212 22:02:10.227582 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 22:02:11.228872 kubelet[1981]: E0212 22:02:11.228825 1981 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"