Feb 9 19:02:27.221903 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:02:27.221937 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:02:27.221952 kernel: BIOS-provided physical RAM map:
Feb 9 19:02:27.221963 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 9 19:02:27.221974 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 9 19:02:27.221985 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 9 19:02:27.222002 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 9 19:02:27.222014 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 9 19:02:27.222025 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 9 19:02:27.222037 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 9 19:02:27.222049 kernel: NX (Execute Disable) protection: active
Feb 9 19:02:27.222060 kernel: SMBIOS 2.7 present.
Feb 9 19:02:27.222072 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 9 19:02:27.222084 kernel: Hypervisor detected: KVM
Feb 9 19:02:27.222102 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:02:27.222115 kernel: kvm-clock: cpu 0, msr 11faa001, primary cpu clock
Feb 9 19:02:27.222128 kernel: kvm-clock: using sched offset of 6878269329 cycles
Feb 9 19:02:27.222141 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:02:27.222154 kernel: tsc: Detected 2500.004 MHz processor
Feb 9 19:02:27.222167 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:02:27.222183 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:02:27.222195 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 9 19:02:27.222208 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:02:27.222221 kernel: Using GB pages for direct mapping
Feb 9 19:02:27.222234 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:02:27.222247 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 9 19:02:27.222261 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 9 19:02:27.222274 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 19:02:27.222287 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 9 19:02:27.222303 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 9 19:02:27.222315 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 9 19:02:27.222328 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 19:02:27.222341 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 9 19:02:27.222354 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 19:02:27.222366 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 9 19:02:27.222379 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 9 19:02:27.222392 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 9 19:02:27.222408 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 9 19:02:27.222421 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 9 19:02:27.222435 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 9 19:02:27.222454 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 9 19:02:27.222468 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 9 19:02:27.222482 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 9 19:02:27.222496 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 9 19:02:27.222512 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 9 19:02:27.222525 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 9 19:02:27.222539 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 9 19:02:27.222553 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:02:27.222567 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:02:27.222580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 9 19:02:27.222594 kernel: NUMA: Initialized distance table, cnt=1
Feb 9 19:02:27.222608 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 9 19:02:27.222625 kernel: Zone ranges:
Feb 9 19:02:27.222639 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:02:27.222653 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 9 19:02:27.222677 kernel: Normal empty
Feb 9 19:02:27.222691 kernel: Movable zone start for each node
Feb 9 19:02:27.222704 kernel: Early memory node ranges
Feb 9 19:02:27.222718 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 9 19:02:27.222732 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 9 19:02:27.222745 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 9 19:02:27.222761 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:02:27.222775 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 9 19:02:27.222789 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 9 19:02:27.222802 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 19:02:27.222816 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:02:27.222830 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 9 19:02:27.222844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:02:27.222859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:02:27.222873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:02:27.222890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:02:27.222904 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:02:27.222918 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 19:02:27.222933 kernel: TSC deadline timer available
Feb 9 19:02:27.222947 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:02:27.222960 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 9 19:02:27.222972 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:02:27.222986 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:02:27.223001 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:02:27.223018 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:02:27.223033 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:02:27.223046 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:02:27.223060 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Feb 9 19:02:27.223074 kernel: kvm-guest: PV spinlocks enabled
Feb 9 19:02:27.223088 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:02:27.223102 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 9 19:02:27.223297 kernel: Policy zone: DMA32
Feb 9 19:02:27.223318 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:02:27.223338 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:02:27.223352 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:02:27.223366 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:02:27.223378 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:02:27.223393 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved)
Feb 9 19:02:27.223406 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:02:27.223421 kernel: Kernel/User page tables isolation: enabled
Feb 9 19:02:27.223434 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:02:27.223452 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:02:27.223466 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:02:27.223481 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:02:27.223495 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:02:27.223510 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:02:27.223524 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:02:27.223539 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:02:27.223553 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:02:27.223567 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 9 19:02:27.223584 kernel: random: crng init done
Feb 9 19:02:27.223596 kernel: Console: colour VGA+ 80x25
Feb 9 19:02:27.223610 kernel: printk: console [ttyS0] enabled
Feb 9 19:02:27.223622 kernel: ACPI: Core revision 20210730
Feb 9 19:02:27.223636 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 9 19:02:27.223649 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:02:27.223663 kernel: x2apic enabled
Feb 9 19:02:27.223695 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:02:27.223708 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 9 19:02:27.223724 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Feb 9 19:02:27.223737 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:02:27.223750 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:02:27.223764 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:02:27.223787 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:02:27.223804 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:02:27.223817 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:02:27.223832 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:02:27.223846 kernel: RETBleed: Vulnerable
Feb 9 19:02:27.223859 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:02:27.223874 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:02:27.223887 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:02:27.223901 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:02:27.223915 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:02:27.223933 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:02:27.223947 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:02:27.223961 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 19:02:27.223975 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 19:02:27.223989 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:02:27.224071 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:02:27.224086 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:02:27.224100 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 9 19:02:27.224114 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:02:27.224128 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 19:02:27.224142 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 19:02:27.224156 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 9 19:02:27.224226 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 9 19:02:27.224242 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 9 19:02:27.224257 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 9 19:02:27.224272 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 9 19:02:27.224287 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:02:27.224354 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:02:27.224374 kernel: LSM: Security Framework initializing
Feb 9 19:02:27.224389 kernel: SELinux: Initializing.
Feb 9 19:02:27.224405 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:02:27.224419 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:02:27.224434 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:02:27.224448 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:02:27.224463 kernel: signal: max sigframe size: 3632
Feb 9 19:02:27.224478 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:02:27.224491 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:02:27.224509 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:02:27.224524 kernel: x86: Booting SMP configuration:
Feb 9 19:02:27.224539 kernel: .... node #0, CPUs: #1
Feb 9 19:02:27.224553 kernel: kvm-clock: cpu 1, msr 11faa041, secondary cpu clock
Feb 9 19:02:27.224568 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Feb 9 19:02:27.224583 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 9 19:02:27.224599 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:02:27.224614 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:02:27.224629 kernel: smpboot: Max logical packages: 1
Feb 9 19:02:27.224647 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Feb 9 19:02:27.224662 kernel: devtmpfs: initialized
Feb 9 19:02:27.224698 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:02:27.224713 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:02:27.224728 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:02:27.224743 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:02:27.224758 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:02:27.224771 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:02:27.224787 kernel: audit: type=2000 audit(1707505345.742:1): state=initialized audit_enabled=0 res=1
Feb 9 19:02:27.224804 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:02:27.224819 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:02:27.224834 kernel: cpuidle: using governor menu
Feb 9 19:02:27.224849 kernel: ACPI: bus type PCI registered
Feb 9 19:02:27.224864 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:02:27.224879 kernel: dca service started, version 1.12.1
Feb 9 19:02:27.224894 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:02:27.224908 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:02:27.224923 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:02:27.224941 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:02:27.224956 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:02:27.224970 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:02:27.224985 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:02:27.224999 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:02:27.225014 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:02:27.225029 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:02:27.225044 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:02:27.225058 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 9 19:02:27.225116 kernel: ACPI: Interpreter enabled
Feb 9 19:02:27.225134 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:02:27.225150 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:02:27.225164 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:02:27.225179 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 9 19:02:27.225194 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:02:27.225456 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:02:27.225598 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 9 19:02:27.225621 kernel: acpiphp: Slot [3] registered
Feb 9 19:02:27.225637 kernel: acpiphp: Slot [4] registered
Feb 9 19:02:27.225652 kernel: acpiphp: Slot [5] registered
Feb 9 19:02:27.225683 kernel: acpiphp: Slot [6] registered
Feb 9 19:02:27.225696 kernel: acpiphp: Slot [7] registered
Feb 9 19:02:27.225708 kernel: acpiphp: Slot [8] registered
Feb 9 19:02:27.225719 kernel: acpiphp: Slot [9] registered
Feb 9 19:02:27.225731 kernel: acpiphp: Slot [10] registered
Feb 9 19:02:27.225744 kernel: acpiphp: Slot [11] registered
Feb 9 19:02:27.225761 kernel: acpiphp: Slot [12] registered
Feb 9 19:02:27.225775 kernel: acpiphp: Slot [13] registered
Feb 9 19:02:27.225787 kernel: acpiphp: Slot [14] registered
Feb 9 19:02:27.225798 kernel: acpiphp: Slot [15] registered
Feb 9 19:02:27.225810 kernel: acpiphp: Slot [16] registered
Feb 9 19:02:27.225823 kernel: acpiphp: Slot [17] registered
Feb 9 19:02:27.225836 kernel: acpiphp: Slot [18] registered
Feb 9 19:02:27.225849 kernel: acpiphp: Slot [19] registered
Feb 9 19:02:27.225862 kernel: acpiphp: Slot [20] registered
Feb 9 19:02:27.225880 kernel: acpiphp: Slot [21] registered
Feb 9 19:02:27.225893 kernel: acpiphp: Slot [22] registered
Feb 9 19:02:27.225907 kernel: acpiphp: Slot [23] registered
Feb 9 19:02:27.225921 kernel: acpiphp: Slot [24] registered
Feb 9 19:02:27.225935 kernel: acpiphp: Slot [25] registered
Feb 9 19:02:27.225949 kernel: acpiphp: Slot [26] registered
Feb 9 19:02:27.225963 kernel: acpiphp: Slot [27] registered
Feb 9 19:02:27.225977 kernel: acpiphp: Slot [28] registered
Feb 9 19:02:27.225990 kernel: acpiphp: Slot [29] registered
Feb 9 19:02:27.226004 kernel: acpiphp: Slot [30] registered
Feb 9 19:02:27.226021 kernel: acpiphp: Slot [31] registered
Feb 9 19:02:27.226037 kernel: PCI host bridge to bus 0000:00
Feb 9 19:02:27.226199 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:02:27.226320 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:02:27.226434 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:02:27.226640 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 9 19:02:27.226796 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:02:27.227000 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:02:27.227150 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 19:02:27.227287 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 9 19:02:27.227527 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 19:02:27.227755 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 19:02:27.227888 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 9 19:02:27.228015 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 9 19:02:27.228229 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 9 19:02:27.228365 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 9 19:02:27.228493 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 9 19:02:27.228622 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 9 19:02:27.228937 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 14648 usecs
Feb 9 19:02:27.229077 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 9 19:02:27.229209 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 9 19:02:27.229343 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 9 19:02:27.229542 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:02:27.229693 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 19:02:27.229884 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 9 19:02:27.230022 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 19:02:27.230199 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 9 19:02:27.230224 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:02:27.230240 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:02:27.230256 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:02:27.230270 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:02:27.230285 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:02:27.230300 kernel: iommu: Default domain type: Translated
Feb 9 19:02:27.230315 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:02:27.230518 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 9 19:02:27.230660 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:02:27.230969 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 9 19:02:27.230988 kernel: vgaarb: loaded
Feb 9 19:02:27.231005 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:02:27.231024 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:02:27.231043 kernel: PTP clock support registered
Feb 9 19:02:27.231061 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:02:27.231074 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:02:27.231089 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 9 19:02:27.231107 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 9 19:02:27.231131 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 9 19:02:27.231146 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 9 19:02:27.231161 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:02:27.231175 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:02:27.231191 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:02:27.231205 kernel: pnp: PnP ACPI init
Feb 9 19:02:27.231220 kernel: pnp: PnP ACPI: found 5 devices
Feb 9 19:02:27.231235 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:02:27.231293 kernel: NET: Registered PF_INET protocol family
Feb 9 19:02:27.231310 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:02:27.231324 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 9 19:02:27.231339 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:02:27.231354 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:02:27.231368 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 19:02:27.231383 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 9 19:02:27.231398 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:02:27.231412 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:02:27.231430 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:02:27.231444 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:02:27.231575 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:02:27.231764 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:02:27.231923 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:02:27.232040 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 9 19:02:27.232221 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:02:27.232355 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 19:02:27.232378 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:02:27.232454 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:02:27.232470 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 9 19:02:27.232485 kernel: clocksource: Switched to clocksource tsc
Feb 9 19:02:27.232500 kernel: Initialise system trusted keyrings
Feb 9 19:02:27.232515 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 9 19:02:27.232529 kernel: Key type asymmetric registered
Feb 9 19:02:27.232544 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:02:27.232562 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:02:27.232577 kernel: io scheduler mq-deadline registered
Feb 9 19:02:27.232591 kernel: io scheduler kyber registered
Feb 9 19:02:27.232606 kernel: io scheduler bfq registered
Feb 9 19:02:27.232620 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:02:27.232635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:02:27.232650 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:02:27.232665 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:02:27.232691 kernel: i8042: Warning: Keylock active
Feb 9 19:02:27.232756 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:02:27.232773 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:02:27.233173 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 9 19:02:27.233299 kernel: rtc_cmos 00:00: registered as rtc0
Feb 9 19:02:27.233485 kernel: rtc_cmos 00:00: setting system clock to 2024-02-09T19:02:26 UTC (1707505346)
Feb 9 19:02:27.233599 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 9 19:02:27.233617 kernel: intel_pstate: CPU model not supported
Feb 9 19:02:27.233632 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:02:27.233651 kernel: Segment Routing with IPv6
Feb 9 19:02:27.233675 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:02:27.233691 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:02:27.233705 kernel: Key type dns_resolver registered
Feb 9 19:02:27.233720 kernel: IPI shorthand broadcast: enabled
Feb 9 19:02:27.233735 kernel: sched_clock: Marking stable (560497470, 335320184)->(1011133211, -115315557)
Feb 9 19:02:27.233749 kernel: registered taskstats version 1
Feb 9 19:02:27.233763 kernel: Loading compiled-in X.509 certificates
Feb 9 19:02:27.233778 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:02:27.233795 kernel: Key type .fscrypt registered
Feb 9 19:02:27.233809 kernel: Key type fscrypt-provisioning registered
Feb 9 19:02:27.233824 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:02:27.233838 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:02:27.233853 kernel: ima: No architecture policies found
Feb 9 19:02:27.233902 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:02:27.233917 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:02:27.233931 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:02:27.233946 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:02:27.233964 kernel: Run /init as init process
Feb 9 19:02:27.233978 kernel: with arguments:
Feb 9 19:02:27.233993 kernel: /init
Feb 9 19:02:27.234048 kernel: with environment:
Feb 9 19:02:27.234064 kernel: HOME=/
Feb 9 19:02:27.234079 kernel: TERM=linux
Feb 9 19:02:27.234093 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:02:27.234111 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:02:27.234133 systemd[1]: Detected virtualization amazon.
Feb 9 19:02:27.234149 systemd[1]: Detected architecture x86-64.
Feb 9 19:02:27.234163 systemd[1]: Running in initrd.
Feb 9 19:02:27.234178 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:02:27.234210 systemd[1]: Hostname set to .
Feb 9 19:02:27.234232 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:02:27.234247 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:02:27.234263 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:02:27.234278 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:02:27.234294 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:02:27.234310 systemd[1]: Reached target paths.target.
Feb 9 19:02:27.234325 systemd[1]: Reached target slices.target.
Feb 9 19:02:27.234341 systemd[1]: Reached target swap.target.
Feb 9 19:02:27.234357 systemd[1]: Reached target timers.target.
Feb 9 19:02:27.234376 systemd[1]: Listening on iscsid.socket.
Feb 9 19:02:27.234392 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:02:27.234408 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:02:27.234423 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:02:27.234436 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:02:27.234450 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:02:27.234465 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:02:27.234482 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:02:27.234498 systemd[1]: Reached target sockets.target.
Feb 9 19:02:27.234513 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:02:27.234528 systemd[1]: Finished network-cleanup.service.
Feb 9 19:02:27.234543 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:02:27.234558 systemd[1]: Starting systemd-journald.service...
Feb 9 19:02:27.234573 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:02:27.234588 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:02:27.234609 systemd-journald[185]: Journal started
Feb 9 19:02:27.248747 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2f5da6211e36022ef86b0235e54d5c) is 4.8M, max 38.7M, 33.9M free.
Feb 9 19:02:27.248837 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:02:27.238388 systemd-modules-load[186]: Inserted module 'overlay'
Feb 9 19:02:27.433834 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:02:27.433871 kernel: Bridge firewalling registered
Feb 9 19:02:27.433889 kernel: SCSI subsystem initialized
Feb 9 19:02:27.433904 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:02:27.433923 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:02:27.433941 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:02:27.433958 systemd[1]: Started systemd-journald.service.
Feb 9 19:02:27.284827 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 9 19:02:27.333537 systemd-resolved[187]: Positive Trust Anchors:
Feb 9 19:02:27.333549 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:02:27.333597 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:02:27.455506 kernel: audit: type=1130 audit(1707505347.440:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.337894 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 9 19:02:27.465893 kernel: audit: type=1130 audit(1707505347.456:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.338817 systemd-modules-load[186]: Inserted module 'dm_multipath'
Feb 9 19:02:27.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.455473 systemd[1]: Started systemd-resolved.service.
Feb 9 19:02:27.457426 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:02:27.483019 kernel: audit: type=1130 audit(1707505347.465:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.483046 kernel: audit: type=1130 audit(1707505347.476:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.466441 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:02:27.483178 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:02:27.491169 kernel: audit: type=1130 audit(1707505347.484:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:27.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=?
res=success' Feb 9 19:02:27.491356 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:02:27.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.493522 systemd[1]: Reached target nss-lookup.target. Feb 9 19:02:27.498683 kernel: audit: type=1130 audit(1707505347.492:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.501543 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:02:27.503680 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:02:27.506024 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:02:27.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.523994 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:02:27.532417 kernel: audit: type=1130 audit(1707505347.524:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.532771 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:02:27.542976 kernel: audit: type=1130 audit(1707505347.534:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:27.534564 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:02:27.550970 kernel: audit: type=1130 audit(1707505347.543:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.552194 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:02:27.565112 dracut-cmdline[207]: dracut-dracut-053 Feb 9 19:02:27.568398 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:02:27.650694 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:02:27.666720 kernel: iscsi: registered transport (tcp) Feb 9 19:02:27.722925 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:02:27.723001 kernel: QLogic iSCSI HBA Driver Feb 9 19:02:27.771439 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:02:27.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.775102 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 19:02:27.830730 kernel: raid6: avx512x4 gen() 16160 MB/s Feb 9 19:02:27.847723 kernel: raid6: avx512x4 xor() 5386 MB/s Feb 9 19:02:27.865744 kernel: raid6: avx512x2 gen() 13866 MB/s Feb 9 19:02:27.883722 kernel: raid6: avx512x2 xor() 18510 MB/s Feb 9 19:02:27.900728 kernel: raid6: avx512x1 gen() 15250 MB/s Feb 9 19:02:27.918723 kernel: raid6: avx512x1 xor() 20225 MB/s Feb 9 19:02:27.936725 kernel: raid6: avx2x4 gen() 15629 MB/s Feb 9 19:02:27.954717 kernel: raid6: avx2x4 xor() 6358 MB/s Feb 9 19:02:27.972728 kernel: raid6: avx2x2 gen() 15899 MB/s Feb 9 19:02:27.990709 kernel: raid6: avx2x2 xor() 14208 MB/s Feb 9 19:02:28.008727 kernel: raid6: avx2x1 gen() 12114 MB/s Feb 9 19:02:28.026710 kernel: raid6: avx2x1 xor() 13073 MB/s Feb 9 19:02:28.044719 kernel: raid6: sse2x4 gen() 8078 MB/s Feb 9 19:02:28.062713 kernel: raid6: sse2x4 xor() 5167 MB/s Feb 9 19:02:28.080712 kernel: raid6: sse2x2 gen() 7562 MB/s Feb 9 19:02:28.098702 kernel: raid6: sse2x2 xor() 5236 MB/s Feb 9 19:02:28.115715 kernel: raid6: sse2x1 gen() 8660 MB/s Feb 9 19:02:28.133860 kernel: raid6: sse2x1 xor() 4133 MB/s Feb 9 19:02:28.133931 kernel: raid6: using algorithm avx512x4 gen() 16160 MB/s Feb 9 19:02:28.133960 kernel: raid6: .... xor() 5386 MB/s, rmw enabled Feb 9 19:02:28.135307 kernel: raid6: using avx512x2 recovery algorithm Feb 9 19:02:28.150695 kernel: xor: automatically using best checksumming function avx Feb 9 19:02:28.283697 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:02:28.294047 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:02:28.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:28.295000 audit: BPF prog-id=7 op=LOAD Feb 9 19:02:28.295000 audit: BPF prog-id=8 op=LOAD Feb 9 19:02:28.296772 systemd[1]: Starting systemd-udevd.service... 
Feb 9 19:02:28.316061 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 9 19:02:28.323235 systemd[1]: Started systemd-udevd.service. Feb 9 19:02:28.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:28.325200 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:02:28.343200 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation Feb 9 19:02:28.399154 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:02:28.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:28.403235 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:02:28.461158 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:02:28.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:28.532757 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:02:28.538596 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 9 19:02:28.538913 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 9 19:02:28.557704 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Feb 9 19:02:28.572700 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:83:0b:11:23:b5 Feb 9 19:02:28.577103 (udev-worker)[443]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:02:28.873651 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 19:02:28.873717 kernel: AES CTR mode by8 optimization enabled Feb 9 19:02:28.873734 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 9 19:02:28.873971 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 9 19:02:28.873990 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 9 19:02:28.874133 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:02:28.874151 kernel: GPT:9289727 != 16777215 Feb 9 19:02:28.874168 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:02:28.874183 kernel: GPT:9289727 != 16777215 Feb 9 19:02:28.874203 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:02:28.874218 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:02:28.874233 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (433) Feb 9 19:02:28.769304 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:02:28.886782 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:02:28.889711 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:02:28.910917 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:02:28.922638 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:02:28.925781 systemd[1]: Starting disk-uuid.service... Feb 9 19:02:28.935649 disk-uuid[594]: Primary Header is updated. Feb 9 19:02:28.935649 disk-uuid[594]: Secondary Entries is updated. Feb 9 19:02:28.935649 disk-uuid[594]: Secondary Header is updated. Feb 9 19:02:28.942698 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:02:28.950704 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:02:28.957740 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:02:29.955836 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:02:29.955905 disk-uuid[595]: The operation has completed successfully. Feb 9 19:02:30.120747 systemd[1]: disk-uuid.service: Deactivated successfully. 
Feb 9 19:02:30.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.120859 systemd[1]: Finished disk-uuid.service. Feb 9 19:02:30.132414 systemd[1]: Starting verity-setup.service... Feb 9 19:02:30.166700 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:02:30.267897 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:02:30.271596 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:02:30.277205 systemd[1]: Finished verity-setup.service. Feb 9 19:02:30.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.390583 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:02:30.390501 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:02:30.391651 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:02:30.395177 systemd[1]: Starting ignition-setup.service... Feb 9 19:02:30.397377 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:02:30.432868 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:02:30.432929 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:02:30.432943 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:02:30.447693 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:02:30.466268 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:02:30.487260 systemd[1]: Finished ignition-setup.service. 
Feb 9 19:02:30.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.491034 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:02:30.545977 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:02:30.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.550000 audit: BPF prog-id=9 op=LOAD Feb 9 19:02:30.552430 systemd[1]: Starting systemd-networkd.service... Feb 9 19:02:30.599753 systemd-networkd[1108]: lo: Link UP Feb 9 19:02:30.599765 systemd-networkd[1108]: lo: Gained carrier Feb 9 19:02:30.602192 systemd-networkd[1108]: Enumeration completed Feb 9 19:02:30.603200 systemd[1]: Started systemd-networkd.service. Feb 9 19:02:30.603249 systemd-networkd[1108]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:02:30.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.607716 systemd[1]: Reached target network.target. Feb 9 19:02:30.612183 systemd[1]: Starting iscsiuio.service... Feb 9 19:02:30.616886 systemd-networkd[1108]: eth0: Link UP Feb 9 19:02:30.616896 systemd-networkd[1108]: eth0: Gained carrier Feb 9 19:02:30.625599 systemd[1]: Started iscsiuio.service. Feb 9 19:02:30.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.629262 systemd[1]: Starting iscsid.service... 
Feb 9 19:02:30.636854 iscsid[1113]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:02:30.636854 iscsid[1113]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:02:30.636854 iscsid[1113]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:02:30.636854 iscsid[1113]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:02:30.636854 iscsid[1113]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:02:30.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.638834 systemd[1]: Started iscsid.service. Feb 9 19:02:30.656034 iscsid[1113]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:02:30.641813 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:02:30.660679 systemd-networkd[1108]: eth0: DHCPv4 address 172.31.23.81/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:02:30.665097 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:02:30.668080 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:02:30.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.670041 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:02:30.673396 systemd[1]: Reached target remote-fs.target. 
Feb 9 19:02:30.676546 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:02:30.693625 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:02:30.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.806240 ignition[1074]: Ignition 2.14.0 Feb 9 19:02:30.806254 ignition[1074]: Stage: fetch-offline Feb 9 19:02:30.806476 ignition[1074]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:02:30.806517 ignition[1074]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:02:30.819663 ignition[1074]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:02:30.821267 ignition[1074]: Ignition finished successfully Feb 9 19:02:30.823641 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:02:30.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.833967 systemd[1]: Starting ignition-fetch.service... 
Feb 9 19:02:30.847680 ignition[1132]: Ignition 2.14.0 Feb 9 19:02:30.847726 ignition[1132]: Stage: fetch Feb 9 19:02:30.848082 ignition[1132]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:02:30.848317 ignition[1132]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:02:30.865310 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:02:30.867917 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:02:30.881975 ignition[1132]: INFO : PUT result: OK Feb 9 19:02:30.886226 ignition[1132]: DEBUG : parsed url from cmdline: "" Feb 9 19:02:30.886226 ignition[1132]: INFO : no config URL provided Feb 9 19:02:30.886226 ignition[1132]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:02:30.886226 ignition[1132]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 9 19:02:30.892599 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:02:30.892599 ignition[1132]: INFO : PUT result: OK Feb 9 19:02:30.892599 ignition[1132]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 9 19:02:30.897227 ignition[1132]: INFO : GET result: OK Feb 9 19:02:30.898547 ignition[1132]: DEBUG : parsing config with SHA512: 5dc1ffaae3146b44fa45b619889a5ebeea653f1886587b83e0be763c2fb07a6b7b1bf5a1e20ba0238f36dfcf2338ed69319bf59e35ec28b3d339cf7c23d218b1 Feb 9 19:02:30.924092 unknown[1132]: fetched base config from "system" Feb 9 19:02:30.924128 unknown[1132]: fetched base config from "system" Feb 9 19:02:30.924140 unknown[1132]: fetched user config from "aws" Feb 9 19:02:30.928845 ignition[1132]: fetch: fetch complete Feb 9 19:02:30.928853 ignition[1132]: fetch: fetch passed Feb 9 19:02:30.928937 ignition[1132]: Ignition finished successfully Feb 9 19:02:30.934225 systemd[1]: Finished ignition-fetch.service. 
Feb 9 19:02:30.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.937688 systemd[1]: Starting ignition-kargs.service... Feb 9 19:02:30.954418 ignition[1138]: Ignition 2.14.0 Feb 9 19:02:30.954430 ignition[1138]: Stage: kargs Feb 9 19:02:30.954634 ignition[1138]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:02:30.954683 ignition[1138]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:02:30.965417 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:02:30.966983 ignition[1138]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:02:30.968628 ignition[1138]: INFO : PUT result: OK Feb 9 19:02:30.972638 ignition[1138]: kargs: kargs passed Feb 9 19:02:30.972714 ignition[1138]: Ignition finished successfully Feb 9 19:02:30.975760 systemd[1]: Finished ignition-kargs.service. Feb 9 19:02:30.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.978916 systemd[1]: Starting ignition-disks.service... 
Feb 9 19:02:30.996145 ignition[1144]: Ignition 2.14.0 Feb 9 19:02:30.996157 ignition[1144]: Stage: disks Feb 9 19:02:30.996362 ignition[1144]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:02:30.996395 ignition[1144]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:02:31.010733 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:02:31.012199 ignition[1144]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:02:31.014096 ignition[1144]: INFO : PUT result: OK Feb 9 19:02:31.018345 ignition[1144]: disks: disks passed Feb 9 19:02:31.018689 ignition[1144]: Ignition finished successfully Feb 9 19:02:31.021284 systemd[1]: Finished ignition-disks.service. Feb 9 19:02:31.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:31.022516 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:02:31.026084 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:02:31.028366 systemd[1]: Reached target local-fs.target. Feb 9 19:02:31.029518 systemd[1]: Reached target sysinit.target. Feb 9 19:02:31.031533 systemd[1]: Reached target basic.target. Feb 9 19:02:31.037448 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:02:31.060826 systemd-fsck[1152]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 19:02:31.073337 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:02:31.086643 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 19:02:31.086727 kernel: audit: type=1130 audit(1707505351.075:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:31.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:31.079489 systemd[1]: Mounting sysroot.mount... Feb 9 19:02:31.096313 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:02:31.099035 systemd[1]: Mounted sysroot.mount. Feb 9 19:02:31.099277 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:02:31.107595 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:02:31.110568 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:02:31.110630 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:02:31.110659 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:02:31.122113 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:02:31.124131 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:02:31.133933 initrd-setup-root[1173]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:02:31.141727 initrd-setup-root[1181]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:02:31.147986 initrd-setup-root[1189]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:02:31.154025 initrd-setup-root[1197]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:02:31.205564 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:02:31.216433 kernel: audit: type=1130 audit(1707505351.208:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:31.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:02:31.210258 systemd[1]: Starting ignition-mount.service... Feb 9 19:02:31.217997 systemd[1]: Starting sysroot-boot.service... Feb 9 19:02:31.221688 bash[1214]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 19:02:31.234261 ignition[1215]: INFO : Ignition 2.14.0 Feb 9 19:02:31.234261 ignition[1215]: INFO : Stage: mount Feb 9 19:02:31.236607 ignition[1215]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:02:31.236607 ignition[1215]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:02:31.257456 ignition[1215]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:02:31.260065 ignition[1215]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:02:31.263199 systemd[1]: Finished sysroot-boot.service. Feb 9 19:02:31.269790 kernel: audit: type=1130 audit(1707505351.263:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:31.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:31.269884 ignition[1215]: INFO : PUT result: OK Feb 9 19:02:31.273266 ignition[1215]: INFO : mount: mount passed Feb 9 19:02:31.274789 ignition[1215]: INFO : Ignition finished successfully Feb 9 19:02:31.276994 systemd[1]: Finished ignition-mount.service. Feb 9 19:02:31.289825 kernel: audit: type=1130 audit(1707505351.277:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Feb 9 19:02:31.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:31.309204 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:02:31.325695 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1224)
Feb 9 19:02:31.329688 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:02:31.329752 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:02:31.329779 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:02:31.335693 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:02:31.341176 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:02:31.344852 systemd[1]: Starting ignition-files.service...
Feb 9 19:02:31.370038 ignition[1244]: INFO : Ignition 2.14.0
Feb 9 19:02:31.370038 ignition[1244]: INFO : Stage: files
Feb 9 19:02:31.372421 ignition[1244]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:02:31.372421 ignition[1244]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:02:31.384186 ignition[1244]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:02:31.385713 ignition[1244]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:02:31.387739 ignition[1244]: INFO : PUT result: OK
Feb 9 19:02:31.396647 ignition[1244]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:02:31.401575 ignition[1244]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:02:31.401575 ignition[1244]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:02:31.414612 ignition[1244]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:02:31.416813 ignition[1244]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:02:31.420203 unknown[1244]: wrote ssh authorized keys file for user: core
Feb 9 19:02:31.422239 ignition[1244]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:02:31.424004 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:02:31.424004 ignition[1244]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 19:02:31.868858 ignition[1244]: INFO : GET result: OK
Feb 9 19:02:32.114649 systemd-networkd[1108]: eth0: Gained IPv6LL
Feb 9 19:02:32.146393 ignition[1244]: DEBUG : file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 19:02:32.150073 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:02:32.150073 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:02:32.150073 ignition[1244]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:02:32.556321 ignition[1244]: INFO : GET result: OK
Feb 9 19:02:32.697404 ignition[1244]: DEBUG : file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 19:02:32.700532 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:02:32.700532 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 19:02:32.700532 ignition[1244]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:02:32.714046 ignition[1244]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1130268300"
Feb 9 19:02:32.718822 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1244)
Feb 9 19:02:32.718850 ignition[1244]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1130268300": device or resource busy
Feb 9 19:02:32.718850 ignition[1244]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1130268300", trying btrfs: device or resource busy
Feb 9 19:02:32.718850 ignition[1244]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1130268300"
Feb 9 19:02:32.733057 ignition[1244]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1130268300"
Feb 9 19:02:32.743719 ignition[1244]: INFO : op(3): [started] unmounting "/mnt/oem1130268300"
Feb 9 19:02:32.747153 ignition[1244]: INFO : op(3): [finished] unmounting "/mnt/oem1130268300"
Feb 9 19:02:32.747153 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 19:02:32.747153 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:02:32.747153 ignition[1244]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:02:32.745150 systemd[1]: mnt-oem1130268300.mount: Deactivated successfully.
Feb 9 19:02:40.018848 ignition[1244]: INFO : GET result: OK
Feb 9 19:02:40.376390 ignition[1244]: DEBUG : file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:02:40.379928 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:02:40.379928 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:02:40.379928 ignition[1244]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:02:40.445345 ignition[1244]: INFO : GET result: OK
Feb 9 19:02:41.074674 ignition[1244]: DEBUG : file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 19:02:41.078001 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:02:41.078001 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:02:41.078001 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:02:41.078001 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:02:41.078001 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:02:41.078001 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:02:41.099724 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:02:41.099724 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 19:02:41.099724 ignition[1244]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:02:41.116549 ignition[1244]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3057479207"
Feb 9 19:02:41.120763 ignition[1244]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3057479207": device or resource busy
Feb 9 19:02:41.120763 ignition[1244]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3057479207", trying btrfs: device or resource busy
Feb 9 19:02:41.120763 ignition[1244]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3057479207"
Feb 9 19:02:41.132606 ignition[1244]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3057479207"
Feb 9 19:02:41.132606 ignition[1244]: INFO : op(6): [started] unmounting "/mnt/oem3057479207"
Feb 9 19:02:41.132606 ignition[1244]: INFO : op(6): [finished] unmounting "/mnt/oem3057479207"
Feb 9 19:02:41.132606 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 19:02:41.132606 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 19:02:41.132606 ignition[1244]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:02:41.155581 ignition[1244]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4267231148"
Feb 9 19:02:41.157968 ignition[1244]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4267231148": device or resource busy
Feb 9 19:02:41.157968 ignition[1244]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4267231148", trying btrfs: device or resource busy
Feb 9 19:02:41.157968 ignition[1244]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4267231148"
Feb 9 19:02:41.157968 ignition[1244]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4267231148"
Feb 9 19:02:41.157968 ignition[1244]: INFO : op(9): [started] unmounting "/mnt/oem4267231148"
Feb 9 19:02:41.157968 ignition[1244]: INFO : op(9): [finished] unmounting "/mnt/oem4267231148"
Feb 9 19:02:41.157968 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 19:02:41.166419 systemd[1]: mnt-oem4267231148.mount: Deactivated successfully.
Feb 9 19:02:41.178495 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:02:41.184510 ignition[1244]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:02:41.197783 ignition[1244]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1664661361"
Feb 9 19:02:41.200041 ignition[1244]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1664661361": device or resource busy
Feb 9 19:02:41.200041 ignition[1244]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1664661361", trying btrfs: device or resource busy
Feb 9 19:02:41.200041 ignition[1244]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1664661361"
Feb 9 19:02:41.208254 ignition[1244]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1664661361"
Feb 9 19:02:41.208254 ignition[1244]: INFO : op(c): [started] unmounting "/mnt/oem1664661361"
Feb 9 19:02:41.208254 ignition[1244]: INFO : op(c): [finished] unmounting "/mnt/oem1664661361"
Feb 9 19:02:41.208254 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:02:41.208254 ignition[1244]: INFO : files: op(e): [started] processing unit "amazon-ssm-agent.service"
Feb 9 19:02:41.208254 ignition[1244]: INFO : files: op(e): op(f): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(e): op(f): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(e): [finished] processing unit "amazon-ssm-agent.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(10): [started] processing unit "nvidia.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(10): [finished] processing unit "nvidia.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(14): [started] processing unit "prepare-critools.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(14): [finished] processing unit "prepare-critools.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(16): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service"
Feb 9 19:02:41.220448 ignition[1244]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service"
Feb 9 19:02:41.267345 ignition[1244]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:02:41.267345 ignition[1244]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:02:41.267345 ignition[1244]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:02:41.267345 ignition[1244]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:02:41.267345 ignition[1244]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:02:41.267345 ignition[1244]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:02:41.267345 ignition[1244]: INFO : files: files passed
Feb 9 19:02:41.267345 ignition[1244]: INFO : Ignition finished successfully
Feb 9 19:02:41.292651 systemd[1]: Finished ignition-files.service.
Feb 9 19:02:41.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.298685 kernel: audit: type=1130 audit(1707505361.293:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.300787 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:02:41.303599 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:02:41.304389 systemd[1]: Starting ignition-quench.service...
Feb 9 19:02:41.311172 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:02:41.311380 systemd[1]: Finished ignition-quench.service.
Feb 9 19:02:41.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.316210 initrd-setup-root-after-ignition[1269]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:02:41.329616 kernel: audit: type=1130 audit(1707505361.314:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.329650 kernel: audit: type=1131 audit(1707505361.314:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.317015 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:02:41.329798 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:02:41.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.336092 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:02:41.346229 kernel: audit: type=1130 audit(1707505361.329:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.364950 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:02:41.365086 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:02:41.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.367875 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:02:41.381018 kernel: audit: type=1130 audit(1707505361.367:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.381047 kernel: audit: type=1131 audit(1707505361.367:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.381206 systemd[1]: Reached target initrd.target.
Feb 9 19:02:41.381339 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:02:41.382321 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:02:41.396684 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:02:41.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.399461 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:02:41.406878 kernel: audit: type=1130 audit(1707505361.398:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.411513 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:02:41.413694 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:02:41.415892 systemd[1]: Stopped target timers.target.
Feb 9 19:02:41.417723 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:02:41.418831 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:02:41.427740 kernel: audit: type=1131 audit(1707505361.422:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.423179 systemd[1]: Stopped target initrd.target.
Feb 9 19:02:41.428733 systemd[1]: Stopped target basic.target.
Feb 9 19:02:41.431525 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:02:41.433542 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:02:41.435708 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:02:41.437975 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:02:41.439957 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:02:41.442061 systemd[1]: Stopped target sysinit.target.
Feb 9 19:02:41.443982 systemd[1]: Stopped target local-fs.target.
Feb 9 19:02:41.447649 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:02:41.450796 systemd[1]: Stopped target swap.target.
Feb 9 19:02:41.453215 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:02:41.454650 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:02:41.470915 kernel: audit: type=1131 audit(1707505361.455:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.456005 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:02:41.480186 kernel: audit: type=1131 audit(1707505361.471:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.467480 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:02:41.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.467872 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:02:41.472277 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:02:41.472436 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:02:41.481542 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:02:41.482174 systemd[1]: Stopped ignition-files.service.
Feb 9 19:02:41.487894 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:02:41.488019 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:02:41.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.488175 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:02:41.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.496322 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:02:41.507279 ignition[1282]: INFO : Ignition 2.14.0
Feb 9 19:02:41.507279 ignition[1282]: INFO : Stage: umount
Feb 9 19:02:41.507279 ignition[1282]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:02:41.507279 ignition[1282]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:02:41.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.501849 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:02:41.522253 ignition[1282]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:02:41.522253 ignition[1282]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:02:41.502133 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:02:41.526898 ignition[1282]: INFO : PUT result: OK
Feb 9 19:02:41.505518 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:02:41.505733 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:02:41.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.521998 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:02:41.522122 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:02:41.536245 ignition[1282]: INFO : umount: umount passed
Feb 9 19:02:41.537300 ignition[1282]: INFO : Ignition finished successfully
Feb 9 19:02:41.538019 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:02:41.538187 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:02:41.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.541698 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:02:41.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.541763 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:02:41.542797 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:02:41.542856 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:02:41.543857 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:02:41.543950 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:02:41.545029 systemd[1]: Stopped target network.target.
Feb 9 19:02:41.545909 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:02:41.545964 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:02:41.547127 systemd[1]: Stopped target paths.target.
Feb 9 19:02:41.549170 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:02:41.559838 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:02:41.563578 systemd[1]: Stopped target slices.target.
Feb 9 19:02:41.565907 systemd[1]: Stopped target sockets.target.
Feb 9 19:02:41.567719 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:02:41.567768 systemd[1]: Closed iscsid.socket.
Feb 9 19:02:41.570605 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:02:41.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.570644 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:02:41.573985 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:02:41.576398 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:02:41.580934 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:02:41.584913 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:02:41.585732 systemd-networkd[1108]: eth0: DHCPv6 lease lost
Feb 9 19:02:41.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.588174 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:02:41.589558 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:02:41.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.596000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:02:41.590745 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:02:41.590919 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:02:41.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.596952 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:02:41.596997 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:02:41.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.608336 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:02:41.608524 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:02:41.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.612997 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:02:41.616275 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:02:41.616354 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:02:41.618869 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:02:41.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.618915 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:02:41.621729 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:02:41.621833 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:02:41.625499 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:02:41.633994 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:02:41.634240 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:02:41.639130 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:02:41.639315 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:02:41.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.653000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:02:41.654812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:02:41.654860 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:02:41.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.655927 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:02:41.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.655964 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:02:41.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.657526 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:02:41.657593 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:02:41.660385 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:02:41.660437 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:02:41.663098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:02:41.663142 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:02:41.672646 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:02:41.684797 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:02:41.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:41.684902 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:02:41.690225 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:02:41.691642 systemd[1]: Stopped network-cleanup.service. Feb 9 19:02:41.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:41.693802 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:02:41.696008 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:02:41.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:41.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:41.698593 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:02:41.702014 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:02:41.715935 systemd[1]: Switching root. Feb 9 19:02:41.736197 systemd-journald[185]: Journal stopped Feb 9 19:02:45.838236 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Feb 9 19:02:45.838468 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:02:45.838497 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:02:45.838516 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:02:45.838536 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:02:45.838553 kernel: SELinux: policy capability open_perms=1
Feb 9 19:02:45.838575 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:02:45.838593 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:02:45.838610 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:02:45.838633 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:02:45.838650 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:02:45.838677 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:02:45.841365 systemd[1]: Successfully loaded SELinux policy in 70.951ms.
Feb 9 19:02:45.841405 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.786ms.
Feb 9 19:02:45.841431 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:02:45.841450 systemd[1]: Detected virtualization amazon.
Feb 9 19:02:45.841472 systemd[1]: Detected architecture x86-64.
Feb 9 19:02:45.841491 systemd[1]: Detected first boot.
Feb 9 19:02:45.841509 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:02:45.841528 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:02:45.841546 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:02:45.841565 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
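The `systemd 252 running in system mode (...)` entry above lists compile-time features as `+NAME` (built in) and `-NAME` (built without). A minimal Python sketch for splitting that flag string into enabled and disabled sets, using the exact string copied from the log entry:

```python
# Split a systemd compile-time feature string (copied from the boot log above)
# into enabled (+) and disabled (-) feature sets.
flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
         "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
         "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
         "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT").split()

enabled = {f[1:] for f in flags if f.startswith("+")}
disabled = {f[1:] for f in flags if f.startswith("-")}

print("enabled:", sorted(enabled))
print("disabled:", sorted(disabled))
```

This makes it easy to check, for instance, that this build has SELinux support but no AppArmor or TPM2 support, which matches the SELinux policy-load messages that follow.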
Feb 9 19:02:45.841584 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:02:45.841608 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:02:45.841633 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:02:45.841652 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:02:45.841707 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:02:45.841726 systemd[1]: Stopped iscsid.service.
Feb 9 19:02:45.841744 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 19:02:45.841763 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 19:02:45.841784 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:02:45.841802 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:02:45.841826 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:02:45.841844 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 19:02:45.841863 systemd[1]: Created slice system-getty.slice.
Feb 9 19:02:45.841881 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:02:45.841899 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:02:45.841918 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:02:45.841938 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:02:45.841956 systemd[1]: Created slice user.slice.
Feb 9 19:02:45.841975 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:02:45.841993 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:02:45.842011 systemd[1]: Set up automount boot.automount.
Feb 9 19:02:45.842029 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:02:45.842047 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 19:02:45.842064 systemd[1]: Stopped target initrd-fs.target.
Feb 9 19:02:45.842085 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 19:02:45.842104 systemd[1]: Reached target integritysetup.target.
Feb 9 19:02:45.842123 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:02:45.842141 systemd[1]: Reached target remote-fs.target.
Feb 9 19:02:45.842159 systemd[1]: Reached target slices.target.
Feb 9 19:02:45.842176 systemd[1]: Reached target swap.target.
Feb 9 19:02:45.842196 systemd[1]: Reached target torcx.target.
Feb 9 19:02:45.842213 systemd[1]: Reached target veritysetup.target.
Feb 9 19:02:45.842232 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:02:45.842249 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:02:45.842270 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:02:45.842288 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:02:45.842307 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:02:45.842324 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:02:45.842343 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:02:45.842380 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:02:45.842398 systemd[1]: Mounting media.mount...
Feb 9 19:02:45.842416 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:02:45.842431 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:02:45.842458 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:02:45.842475 systemd[1]: Mounting tmp.mount...
Feb 9 19:02:45.842493 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:02:45.842509 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:02:45.842526 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:02:45.842543 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:02:45.842580 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:02:45.842598 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:02:45.842617 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:02:45.842638 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:02:45.842655 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:02:45.842694 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:02:45.842719 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 19:02:45.842738 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 19:02:45.842757 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 19:02:45.842776 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 19:02:45.842797 systemd[1]: Stopped systemd-journald.service.
Feb 9 19:02:45.842818 systemd[1]: Starting systemd-journald.service...
Feb 9 19:02:45.842842 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:02:45.842864 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:02:45.842887 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:02:45.842910 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:02:45.842930 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 19:02:45.842948 systemd[1]: Stopped verity-setup.service.
Feb 9 19:02:45.842977 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:02:45.842999 kernel: fuse: init (API version 7.34)
Feb 9 19:02:45.843026 kernel: loop: module loaded
Feb 9 19:02:45.843044 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:02:45.843063 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:02:45.843083 systemd[1]: Mounted media.mount.
Feb 9 19:02:45.843102 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:02:45.843121 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:02:45.843142 systemd[1]: Mounted tmp.mount.
Feb 9 19:02:45.843163 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:02:45.843182 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:02:45.843201 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:02:45.843220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:02:45.843240 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:02:45.843261 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:02:45.843284 systemd-journald[1395]: Journal started
Feb 9 19:02:45.843357 systemd-journald[1395]: Runtime Journal (/run/log/journal/ec2f5da6211e36022ef86b0235e54d5c) is 4.8M, max 38.7M, 33.9M free.
Feb 9 19:02:41.967000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:02:42.049000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:02:42.049000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:02:42.049000 audit: BPF prog-id=10 op=LOAD
Feb 9 19:02:42.049000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 19:02:42.049000 audit: BPF prog-id=11 op=LOAD
Feb 9 19:02:42.049000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 19:02:42.231000 audit[1315]: AVC avc: denied { associate } for pid=1315 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:02:42.231000 audit[1315]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1298 pid=1315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:42.231000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:02:42.233000 audit[1315]: AVC avc: denied { associate } for pid=1315 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:02:42.233000 audit[1315]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b9 a2=1ed a3=0 items=2 ppid=1298 pid=1315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:42.233000 audit: CWD cwd="/"
Feb 9 19:02:42.233000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:42.233000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:02:42.233000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:02:45.523000 audit: BPF prog-id=12 op=LOAD
Feb 9 19:02:45.523000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:02:45.524000 audit: BPF prog-id=13 op=LOAD
Feb 9 19:02:45.524000 audit: BPF prog-id=14 op=LOAD
Feb 9 19:02:45.524000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:02:45.524000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:02:45.525000 audit: BPF prog-id=15 op=LOAD
Feb 9 19:02:45.525000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 19:02:45.525000 audit: BPF prog-id=16 op=LOAD
Feb 9 19:02:45.525000 audit: BPF prog-id=17 op=LOAD
Feb 9 19:02:45.525000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 19:02:45.525000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 19:02:45.527000 audit: BPF prog-id=18 op=LOAD
Feb 9 19:02:45.527000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 19:02:45.527000 audit: BPF prog-id=19 op=LOAD
Feb 9 19:02:45.847354 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:02:45.527000 audit: BPF prog-id=20 op=LOAD
Feb 9 19:02:45.527000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 19:02:45.527000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 19:02:45.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.532000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 19:02:45.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.756000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:02:45.756000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:02:45.756000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:02:45.756000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 19:02:45.756000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 19:02:45.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.836000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:02:45.836000 audit[1395]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffe6f09fb50 a2=4000 a3=7ffe6f09fbec items=0 ppid=1 pid=1395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:02:45.836000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:02:45.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:42.222537 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:02:45.522827 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:02:42.224581 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:02:45.529126 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 19:02:45.850731 systemd[1]: Started systemd-journald.service.
Feb 9 19:02:42.224613 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:02:42.224659 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 19:02:42.224689 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 19:02:42.224738 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 19:02:42.224760 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 19:02:42.225032 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 19:02:42.225082 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:02:42.225102 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:02:42.228096 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 19:02:42.228152 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 19:02:42.228183 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 19:02:42.228207 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 19:02:42.228234 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 19:02:42.228257 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:42Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 19:02:45.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:44.879118 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:02:45.853622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:02:44.879391 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:02:45.853825 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:02:44.879507 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:02:45.855418 systemd[1]: modprobe@fuse.service: Deactivated successfully.
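The audit PROCTITLE records above carry the generator's command line as a hex string with NUL-separated arguments (the last argument is truncated in this capture and stays truncated). A minimal Python sketch for decoding such a payload; the short hex string in the example is the first two arguments' worth of bytes from the torcx-generator record:

```python
# Decode an audit PROCTITLE hex payload: bytes are hex-encoded, and argv
# elements are separated by NUL bytes.
def decode_proctitle(hex_payload: str) -> str:
    raw = bytes.fromhex(hex_payload)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00"))

# Leading bytes of the torcx-generator PROCTITLE record seen above:
print(decode_proctitle(
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F7273"
    "2F746F7263782D67656E657261746F72"
))  # → /usr/lib/systemd/system-generators/torcx-generator
```

Applied to the full payload, this yields the generator invocation with its `/run/systemd/generator*` output-directory arguments, matching the `exe=` field of the surrounding SYSCALL records.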
Feb 9 19:02:44.879704 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:02:45.855714 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:02:44.879794 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 19:02:44.879882 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-02-09T19:02:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 19:02:45.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.857340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:02:45.857516 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:02:45.859126 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:02:45.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.861231 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:02:45.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.863212 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:02:45.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.864988 systemd[1]: Reached target network-pre.target.
Feb 9 19:02:45.867814 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:02:45.870336 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:02:45.874114 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:02:45.878108 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:02:45.882307 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:02:45.883488 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:02:45.885382 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:02:45.886538 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:02:45.888383 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:02:45.891368 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:02:45.894326 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:02:45.906797 systemd-journald[1395]: Time spent on flushing to /var/log/journal/ec2f5da6211e36022ef86b0235e54d5c is 106.779ms for 1188 entries.
Feb 9 19:02:45.906797 systemd-journald[1395]: System Journal (/var/log/journal/ec2f5da6211e36022ef86b0235e54d5c) is 8.0M, max 195.6M, 187.6M free.
Feb 9 19:02:46.035206 systemd-journald[1395]: Received client request to flush runtime journal.
Feb 9 19:02:45.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:45.922554 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:02:45.924777 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:02:45.947331 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:02:45.973716 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:02:45.976612 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:02:46.036406 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:02:46.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:46.045423 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:02:46.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:46.069752 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:02:46.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:46.074134 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:02:46.088906 udevadm[1431]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:02:46.811136 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:02:46.825593 kernel: kauditd_printk_skb: 106 callbacks suppressed
Feb 9 19:02:46.826018 kernel: audit: type=1130 audit(1707505366.815:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:46.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:46.824000 audit: BPF prog-id=24 op=LOAD
Feb 9 19:02:46.825000 audit: BPF prog-id=25 op=LOAD
Feb 9 19:02:46.828952 kernel: audit: type=1334 audit(1707505366.824:145): prog-id=24 op=LOAD
Feb 9 19:02:46.829057 kernel: audit: type=1334 audit(1707505366.825:146): prog-id=25 op=LOAD
Feb 9 19:02:46.829130 kernel: audit: type=1334 audit(1707505366.825:147): prog-id=7 op=UNLOAD
Feb 9 19:02:46.829287 kernel: audit: type=1334 audit(1707505366.825:148): prog-id=8 op=UNLOAD
Feb 9 19:02:46.825000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:02:46.825000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:02:46.826964 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:02:46.863568 systemd-udevd[1432]: Using default interface naming scheme 'v252'.
Feb 9 19:02:46.957660 systemd[1]: Started systemd-udevd.service.
Feb 9 19:02:46.977003 kernel: audit: type=1130 audit(1707505366.963:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:46.977125 kernel: audit: type=1334 audit(1707505366.974:150): prog-id=26 op=LOAD
Feb 9 19:02:46.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:02:46.974000 audit: BPF prog-id=26 op=LOAD
Feb 9 19:02:46.980142 systemd[1]: Starting systemd-networkd.service...
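The `kernel: audit: ... audit(1707505366.815:144)` lines above stamp each record as `audit(<epoch-seconds>.<milliseconds>:<serial>)`. A small Python sketch for converting such a stamp to UTC wall-clock time, using a stamp copied from the log:

```python
from datetime import datetime, timezone

# Convert a kernel audit timestamp "epoch.millis:serial" to an ISO-8601 UTC time.
def audit_time(stamp: str) -> str:
    epoch, _serial = stamp.split(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc).isoformat()

# The systemd-hwdb-update SERVICE_START record seen above:
print(audit_time("1707505366.815:144"))  # → 2024-02-09T19:02:46.815000+00:00
```

The result lines up with the journald-rendered `Feb 9 19:02:46.8...` prefixes on the neighbouring entries, confirming the two timestamp formats describe the same events.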
Feb 9 19:02:46.993000 audit: BPF prog-id=27 op=LOAD Feb 9 19:02:46.998903 kernel: audit: type=1334 audit(1707505366.993:151): prog-id=27 op=LOAD Feb 9 19:02:46.998945 kernel: audit: type=1334 audit(1707505366.994:152): prog-id=28 op=LOAD Feb 9 19:02:46.998975 kernel: audit: type=1334 audit(1707505366.996:153): prog-id=29 op=LOAD Feb 9 19:02:46.994000 audit: BPF prog-id=28 op=LOAD Feb 9 19:02:46.996000 audit: BPF prog-id=29 op=LOAD Feb 9 19:02:46.998108 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:02:47.063621 systemd[1]: Started systemd-userdbd.service. Feb 9 19:02:47.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.073858 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:02:47.081362 (udev-worker)[1435]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:02:47.152735 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:02:47.160043 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:02:47.160416 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 9 19:02:47.171859 kernel: ACPI: button: Sleep Button [SLPF] Feb 9 19:02:47.235343 systemd-networkd[1448]: lo: Link UP Feb 9 19:02:47.235361 systemd-networkd[1448]: lo: Gained carrier Feb 9 19:02:47.236063 systemd-networkd[1448]: Enumeration completed Feb 9 19:02:47.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.236208 systemd[1]: Started systemd-networkd.service. Feb 9 19:02:47.236221 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 19:02:47.241279 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:02:47.243534 systemd-networkd[1448]: eth0: Link UP Feb 9 19:02:47.243712 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:02:47.243999 systemd-networkd[1448]: eth0: Gained carrier Feb 9 19:02:47.196000 audit[1436]: AVC avc: denied { confidentiality } for pid=1436 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:02:47.258928 systemd-networkd[1448]: eth0: DHCPv4 address 172.31.23.81/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:02:47.196000 audit[1436]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d60b316a90 a1=32194 a2=7efc8edbebc5 a3=5 items=108 ppid=1432 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:47.196000 audit: CWD cwd="/" Feb 9 19:02:47.196000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=1 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=2 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=3 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=4 name=(null) inode=14587 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=5 name=(null) inode=14589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=6 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=7 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=8 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=9 name=(null) inode=14591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=10 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=11 name=(null) inode=14592 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=12 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=13 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=14 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=15 name=(null) inode=14594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=16 name=(null) inode=14590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=17 name=(null) inode=14595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=18 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=19 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=20 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=21 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=22 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=23 name=(null) inode=14598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=24 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=25 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=26 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=27 name=(null) inode=14600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=28 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=29 name=(null) inode=14601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=30 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=31 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:02:47.196000 audit: PATH item=32 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=33 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=34 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=35 name=(null) inode=14604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=36 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=37 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=38 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=39 name=(null) inode=14606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=40 name=(null) inode=14602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=41 
name=(null) inode=14607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=42 name=(null) inode=14587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=43 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=44 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=45 name=(null) inode=14609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=46 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=47 name=(null) inode=14610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=48 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=49 name=(null) inode=14611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=50 name=(null) inode=14608 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=51 name=(null) inode=14612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=52 name=(null) inode=14608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=53 name=(null) inode=14613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=55 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=56 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=57 name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=58 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=59 name=(null) inode=14616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=60 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=61 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=62 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=63 name=(null) inode=14618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=64 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=65 name=(null) inode=14619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=66 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=67 name=(null) inode=14620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=68 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=69 name=(null) inode=14621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=70 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=71 name=(null) inode=14622 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=72 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=73 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=74 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=75 name=(null) inode=14624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=76 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=77 name=(null) inode=14625 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:02:47.196000 audit: PATH item=78 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=79 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=80 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=81 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=82 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=83 name=(null) inode=14628 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=84 name=(null) inode=14614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=85 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=86 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=87 
name=(null) inode=14630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=88 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=89 name=(null) inode=14631 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=90 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=91 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=92 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=93 name=(null) inode=14633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=94 name=(null) inode=14629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=95 name=(null) inode=14634 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=96 name=(null) inode=14614 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=97 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=98 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=99 name=(null) inode=14636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=100 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=101 name=(null) inode=14637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=102 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=103 name=(null) inode=14638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=104 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=105 name=(null) inode=14639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=106 name=(null) inode=14635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PATH item=107 name=(null) inode=14640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:47.196000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:02:47.294851 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Feb 9 19:02:47.309700 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 9 19:02:47.326695 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:02:47.345707 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1433) Feb 9 19:02:47.464103 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:02:47.561106 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:02:47.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.563778 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:02:47.589145 lvm[1546]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:02:47.614007 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:02:47.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.615458 systemd[1]: Reached target cryptsetup.target. 
Feb 9 19:02:47.617806 systemd[1]: Starting lvm2-activation.service... Feb 9 19:02:47.623085 lvm[1547]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:02:47.647169 systemd[1]: Finished lvm2-activation.service. Feb 9 19:02:47.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.650036 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:02:47.652969 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:02:47.653004 systemd[1]: Reached target local-fs.target. Feb 9 19:02:47.657180 systemd[1]: Reached target machines.target. Feb 9 19:02:47.661135 systemd[1]: Starting ldconfig.service... Feb 9 19:02:47.663451 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:02:47.663552 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:02:47.666476 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:02:47.670280 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:02:47.674188 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:02:47.676214 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:02:47.676310 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:02:47.678416 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:02:47.688176 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1549 (bootctl) Feb 9 19:02:47.691138 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Feb 9 19:02:47.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.722869 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:02:47.740037 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:02:47.751151 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:02:47.770186 systemd-tmpfiles[1552]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:02:47.899849 systemd-fsck[1557]: fsck.fat 4.2 (2021-01-31) Feb 9 19:02:47.899849 systemd-fsck[1557]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters Feb 9 19:02:47.903479 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:02:47.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:47.906892 systemd[1]: Mounting boot.mount... Feb 9 19:02:47.944813 systemd[1]: Mounted boot.mount. Feb 9 19:02:47.994967 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:02:47.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:48.135116 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:02:48.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:02:48.138393 systemd[1]: Starting audit-rules.service... Feb 9 19:02:48.140937 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:02:48.143728 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:02:48.145000 audit: BPF prog-id=30 op=LOAD Feb 9 19:02:48.147507 systemd[1]: Starting systemd-resolved.service... Feb 9 19:02:48.152000 audit: BPF prog-id=31 op=LOAD Feb 9 19:02:48.155860 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:02:48.160264 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:02:48.191150 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:02:48.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:48.193221 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:02:48.196000 audit[1576]: SYSTEM_BOOT pid=1576 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:02:48.202927 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:02:48.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:48.290694 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:02:48.292401 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:02:48.294384 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 9 19:02:48.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:48.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:48.336000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:02:48.336000 audit[1591]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff07f3bf80 a2=420 a3=0 items=0 ppid=1571 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:48.336000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:02:48.337097 augenrules[1591]: No rules Feb 9 19:02:48.338481 systemd[1]: Finished audit-rules.service. Feb 9 19:02:48.362346 systemd-resolved[1574]: Positive Trust Anchors: Feb 9 19:02:48.362371 systemd-resolved[1574]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:02:48.362414 systemd-resolved[1574]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:02:48.368365 systemd[1]: Started systemd-timesyncd.service. 
Feb 9 19:02:48.370317 systemd[1]: Reached target time-set.target. Feb 9 19:02:48.386966 systemd-timesyncd[1575]: Contacted time server 15.204.87.223:123 (0.flatcar.pool.ntp.org). Feb 9 19:02:48.387568 systemd-timesyncd[1575]: Initial clock synchronization to Fri 2024-02-09 19:02:48.545336 UTC. Feb 9 19:02:48.403803 systemd-resolved[1574]: Defaulting to hostname 'linux'. Feb 9 19:02:48.406766 systemd[1]: Started systemd-resolved.service. Feb 9 19:02:48.408034 systemd[1]: Reached target network.target. Feb 9 19:02:48.409103 systemd[1]: Reached target nss-lookup.target. Feb 9 19:02:48.413496 ldconfig[1548]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:02:48.421187 systemd[1]: Finished ldconfig.service. Feb 9 19:02:48.424225 systemd[1]: Starting systemd-update-done.service... Feb 9 19:02:48.432312 systemd[1]: Finished systemd-update-done.service. Feb 9 19:02:48.433849 systemd[1]: Reached target sysinit.target. Feb 9 19:02:48.435124 systemd[1]: Started motdgen.path. Feb 9 19:02:48.436489 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:02:48.439054 systemd[1]: Started logrotate.timer. Feb 9 19:02:48.440482 systemd[1]: Started mdadm.timer. Feb 9 19:02:48.441704 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:02:48.442988 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:02:48.443150 systemd[1]: Reached target paths.target. Feb 9 19:02:48.444285 systemd[1]: Reached target timers.target. Feb 9 19:02:48.446995 systemd[1]: Listening on dbus.socket. Feb 9 19:02:48.451495 systemd[1]: Starting docker.socket... Feb 9 19:02:48.456995 systemd[1]: Listening on sshd.socket. Feb 9 19:02:48.458231 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 9 19:02:48.458824 systemd[1]: Listening on docker.socket. Feb 9 19:02:48.459920 systemd[1]: Reached target sockets.target. Feb 9 19:02:48.460958 systemd[1]: Reached target basic.target. Feb 9 19:02:48.461938 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:02:48.461973 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:02:48.463174 systemd[1]: Starting containerd.service... Feb 9 19:02:48.466452 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:02:48.469503 systemd[1]: Starting dbus.service... Feb 9 19:02:48.472120 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:02:48.481771 systemd[1]: Starting extend-filesystems.service... Feb 9 19:02:48.486073 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:02:48.490033 systemd[1]: Starting motdgen.service... Feb 9 19:02:48.493253 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:02:48.496813 systemd[1]: Starting prepare-critools.service... Feb 9 19:02:48.500326 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:02:48.504081 systemd[1]: Starting sshd-keygen.service... Feb 9 19:02:48.515290 systemd[1]: Starting systemd-logind.service... Feb 9 19:02:48.596910 jq[1603]: false Feb 9 19:02:48.518866 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:02:48.518934 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:02:48.521034 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 9 19:02:48.609939 jq[1614]: true Feb 9 19:02:48.522932 systemd[1]: Starting update-engine.service... Feb 9 19:02:48.613541 tar[1617]: ./ Feb 9 19:02:48.613541 tar[1617]: ./macvlan Feb 9 19:02:48.525816 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:02:48.619881 tar[1618]: crictl Feb 9 19:02:48.541635 systemd[1]: Created slice system-sshd.slice. Feb 9 19:02:48.620456 dbus-daemon[1602]: [system] SELinux support is enabled Feb 9 19:02:48.599373 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:02:48.599588 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:02:48.636074 dbus-daemon[1602]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1448 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:02:48.601273 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:02:48.601467 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:02:48.620637 systemd[1]: Started dbus.service. Feb 9 19:02:48.626251 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:02:48.626287 systemd[1]: Reached target system-config.target. Feb 9 19:02:48.627842 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:02:48.627866 systemd[1]: Reached target user-config.target. Feb 9 19:02:48.646274 dbus-daemon[1602]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:02:48.652042 systemd[1]: Starting systemd-hostnamed.service... 
Feb 9 19:02:48.666551 jq[1626]: true Feb 9 19:02:48.678556 extend-filesystems[1604]: Found nvme0n1 Feb 9 19:02:48.680076 extend-filesystems[1604]: Found nvme0n1p1 Feb 9 19:02:48.680076 extend-filesystems[1604]: Found nvme0n1p2 Feb 9 19:02:48.680076 extend-filesystems[1604]: Found nvme0n1p3 Feb 9 19:02:48.680076 extend-filesystems[1604]: Found usr Feb 9 19:02:48.680076 extend-filesystems[1604]: Found nvme0n1p4 Feb 9 19:02:48.680076 extend-filesystems[1604]: Found nvme0n1p6 Feb 9 19:02:48.680076 extend-filesystems[1604]: Found nvme0n1p7 Feb 9 19:02:48.680076 extend-filesystems[1604]: Found nvme0n1p9 Feb 9 19:02:48.696696 extend-filesystems[1604]: Checking size of /dev/nvme0n1p9 Feb 9 19:02:48.723096 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:02:48.723318 systemd[1]: Finished motdgen.service. Feb 9 19:02:48.755790 systemd-networkd[1448]: eth0: Gained IPv6LL Feb 9 19:02:48.759509 extend-filesystems[1604]: Resized partition /dev/nvme0n1p9 Feb 9 19:02:48.761367 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:02:48.770028 systemd[1]: Reached target network-online.target. Feb 9 19:02:48.776230 systemd[1]: Started amazon-ssm-agent.service. Feb 9 19:02:48.779785 systemd[1]: Started nvidia.service. Feb 9 19:02:48.810292 bash[1656]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:02:48.829212 extend-filesystems[1660]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:02:48.848722 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:02:48.860967 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:02:48.929983 update_engine[1613]: I0209 19:02:48.907510 1613 main.cc:92] Flatcar Update Engine starting Feb 9 19:02:48.929983 update_engine[1613]: I0209 19:02:48.918036 1613 update_check_scheduler.cc:74] Next update check in 7m36s Feb 9 19:02:48.914731 systemd[1]: Started update-engine.service. Feb 9 19:02:48.918589 systemd[1]: Started locksmithd.service. 
Feb 9 19:02:49.074579 env[1623]: time="2024-02-09T19:02:49.074266805Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:02:49.102737 amazon-ssm-agent[1661]: 2024/02/09 19:02:49 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 19:02:49.117382 amazon-ssm-agent[1661]: Initializing new seelog logger Feb 9 19:02:49.122718 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:02:49.152124 amazon-ssm-agent[1661]: New Seelog Logger Creation Complete Feb 9 19:02:49.152124 amazon-ssm-agent[1661]: 2024/02/09 19:02:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:02:49.152124 amazon-ssm-agent[1661]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:02:49.152303 tar[1617]: ./static Feb 9 19:02:49.152770 amazon-ssm-agent[1661]: 2024/02/09 19:02:49 processing appconfig overrides Feb 9 19:02:49.156915 extend-filesystems[1660]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:02:49.156915 extend-filesystems[1660]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:02:49.156915 extend-filesystems[1660]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:02:49.165822 extend-filesystems[1604]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:02:49.164139 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:02:49.164347 systemd[1]: Finished extend-filesystems.service. Feb 9 19:02:49.174488 systemd-logind[1612]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:02:49.175644 systemd-logind[1612]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 19:02:49.175816 systemd-logind[1612]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:02:49.179346 systemd-logind[1612]: New seat seat0. Feb 9 19:02:49.191507 systemd[1]: Started systemd-logind.service. 
Feb 9 19:02:49.236933 dbus-daemon[1602]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:02:49.237596 dbus-daemon[1602]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1639 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:02:49.237660 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:02:49.243244 systemd[1]: Starting polkit.service... Feb 9 19:02:49.292125 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:02:49.304841 polkitd[1695]: Started polkitd version 121 Feb 9 19:02:49.326961 polkitd[1695]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:02:49.327047 polkitd[1695]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:02:49.333212 env[1623]: time="2024-02-09T19:02:49.333123984Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:02:49.333551 env[1623]: time="2024-02-09T19:02:49.333522720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:49.334727 polkitd[1695]: Finished loading, compiling and executing 2 rules Feb 9 19:02:49.335336 dbus-daemon[1602]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:02:49.335538 systemd[1]: Started polkit.service. Feb 9 19:02:49.338227 polkitd[1695]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:02:49.339728 tar[1617]: ./vlan Feb 9 19:02:49.340020 env[1623]: time="2024-02-09T19:02:49.339980444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:49.340123 env[1623]: time="2024-02-09T19:02:49.340103981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:49.340485 env[1623]: time="2024-02-09T19:02:49.340458169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:49.340575 env[1623]: time="2024-02-09T19:02:49.340560212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:49.340654 env[1623]: time="2024-02-09T19:02:49.340638649Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:02:49.340742 env[1623]: time="2024-02-09T19:02:49.340726702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:49.340915 env[1623]: time="2024-02-09T19:02:49.340896965Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:49.341256 env[1623]: time="2024-02-09T19:02:49.341237397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:49.350911 env[1623]: time="2024-02-09T19:02:49.350827668Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:49.358801 env[1623]: time="2024-02-09T19:02:49.358754835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:02:49.359140 env[1623]: time="2024-02-09T19:02:49.359114861Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:02:49.361194 env[1623]: time="2024-02-09T19:02:49.361163695Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:02:49.371465 env[1623]: time="2024-02-09T19:02:49.371423255Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:02:49.371690 env[1623]: time="2024-02-09T19:02:49.371653539Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:02:49.371807 env[1623]: time="2024-02-09T19:02:49.371788923Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:02:49.372050 env[1623]: time="2024-02-09T19:02:49.372022695Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:02:49.372220 env[1623]: time="2024-02-09T19:02:49.372200992Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:02:49.372304 env[1623]: time="2024-02-09T19:02:49.372289520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:02:49.372380 env[1623]: time="2024-02-09T19:02:49.372360359Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 19:02:49.372456 env[1623]: time="2024-02-09T19:02:49.372442444Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:02:49.372542 env[1623]: time="2024-02-09T19:02:49.372527842Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:02:49.372620 env[1623]: time="2024-02-09T19:02:49.372606260Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:02:49.372719 env[1623]: time="2024-02-09T19:02:49.372696118Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:02:49.372803 env[1623]: time="2024-02-09T19:02:49.372788298Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:02:49.373021 env[1623]: time="2024-02-09T19:02:49.373002328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:02:49.373195 env[1623]: time="2024-02-09T19:02:49.373177504Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:02:49.373755 env[1623]: time="2024-02-09T19:02:49.373730067Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:02:49.373880 env[1623]: time="2024-02-09T19:02:49.373863492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.373964 env[1623]: time="2024-02-09T19:02:49.373947720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:02:49.374105 env[1623]: time="2024-02-09T19:02:49.374088443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 19:02:49.374262 env[1623]: time="2024-02-09T19:02:49.374234339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.374352 env[1623]: time="2024-02-09T19:02:49.374335457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.374428 env[1623]: time="2024-02-09T19:02:49.374413131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.374512 env[1623]: time="2024-02-09T19:02:49.374498055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.374583 env[1623]: time="2024-02-09T19:02:49.374570249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.374653 env[1623]: time="2024-02-09T19:02:49.374639597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.374753 env[1623]: time="2024-02-09T19:02:49.374738072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.374846 env[1623]: time="2024-02-09T19:02:49.374831264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:02:49.375090 env[1623]: time="2024-02-09T19:02:49.375072454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.375181 env[1623]: time="2024-02-09T19:02:49.375164267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.375256 env[1623]: time="2024-02-09T19:02:49.375241478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 19:02:49.375400 env[1623]: time="2024-02-09T19:02:49.375385501Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:02:49.375487 env[1623]: time="2024-02-09T19:02:49.375470048Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:02:49.375554 env[1623]: time="2024-02-09T19:02:49.375540936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:02:49.375630 env[1623]: time="2024-02-09T19:02:49.375615917Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:02:49.375754 env[1623]: time="2024-02-09T19:02:49.375738964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:02:49.376145 env[1623]: time="2024-02-09T19:02:49.376078716Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:02:49.379465 env[1623]: time="2024-02-09T19:02:49.376295976Z" level=info msg="Connect containerd service" Feb 9 19:02:49.379465 env[1623]: time="2024-02-09T19:02:49.376349533Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:02:49.379465 env[1623]: time="2024-02-09T19:02:49.378041502Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:02:49.384349 systemd-hostnamed[1639]: Hostname set to (transient) Feb 9 19:02:49.384350 systemd-resolved[1574]: System hostname changed to 'ip-172-31-23-81'. 
Feb 9 19:02:49.385094 env[1623]: time="2024-02-09T19:02:49.385021615Z" level=info msg="Start subscribing containerd event" Feb 9 19:02:49.408621 env[1623]: time="2024-02-09T19:02:49.408576153Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:02:49.408876 env[1623]: time="2024-02-09T19:02:49.408855157Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:02:49.409147 systemd[1]: Started containerd.service. Feb 9 19:02:49.409629 env[1623]: time="2024-02-09T19:02:49.409581320Z" level=info msg="containerd successfully booted in 0.413504s" Feb 9 19:02:49.413747 env[1623]: time="2024-02-09T19:02:49.413713191Z" level=info msg="Start recovering state" Feb 9 19:02:49.420721 env[1623]: time="2024-02-09T19:02:49.420664924Z" level=info msg="Start event monitor" Feb 9 19:02:49.421318 env[1623]: time="2024-02-09T19:02:49.421281139Z" level=info msg="Start snapshots syncer" Feb 9 19:02:49.432105 env[1623]: time="2024-02-09T19:02:49.432058967Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:02:49.432265 env[1623]: time="2024-02-09T19:02:49.432248496Z" level=info msg="Start streaming server" Feb 9 19:02:49.507774 tar[1617]: ./portmap Feb 9 19:02:49.604107 coreos-metadata[1601]: Feb 09 19:02:49.604 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:02:49.613617 coreos-metadata[1601]: Feb 09 19:02:49.613 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:02:49.614716 coreos-metadata[1601]: Feb 09 19:02:49.614 INFO Fetch successful Feb 9 19:02:49.614883 coreos-metadata[1601]: Feb 09 19:02:49.614 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:02:49.617655 coreos-metadata[1601]: Feb 09 19:02:49.617 INFO Fetch successful Feb 9 19:02:49.621602 unknown[1601]: wrote ssh authorized keys file for user: core Feb 9 19:02:49.633094 tar[1617]: ./host-local Feb 9 19:02:49.650379 update-ssh-keys[1757]: 
Updated "/home/core/.ssh/authorized_keys" Feb 9 19:02:49.651590 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:02:49.773834 tar[1617]: ./vrf Feb 9 19:02:49.915394 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Create new startup processor Feb 9 19:02:49.916267 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:02:49.916419 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Initializing bookkeeping folders Feb 9 19:02:49.916485 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO removing the completed state files Feb 9 19:02:49.916543 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:02:49.916596 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:02:49.916659 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Initializing healthcheck folders for long running plugins Feb 9 19:02:49.916739 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Initializing locations for inventory plugin Feb 9 19:02:49.916819 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Initializing default location for custom inventory Feb 9 19:02:49.916874 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Initializing default location for file inventory Feb 9 19:02:49.916932 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Initializing default location for role inventory Feb 9 19:02:49.917005 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Init the cloudwatchlogs publisher Feb 9 19:02:49.917067 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:02:49.917125 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:02:49.917178 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO 
[instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:02:49.917239 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:02:49.917302 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:configureDocker Feb 9 19:02:49.917369 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:02:49.917441 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:02:49.917623 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:02:49.917698 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:02:49.917781 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:02:49.917845 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:02:49.917906 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO OS: linux, Arch: amd64 Feb 9 19:02:49.919397 amazon-ssm-agent[1661]: datastore file /var/lib/amazon/ssm/i-030830550ba5bbc92/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 19:02:49.956804 tar[1617]: ./bridge Feb 9 19:02:50.025135 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] Starting document processing engine... 
Feb 9 19:02:50.119876 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:02:50.135049 tar[1617]: ./tuning Feb 9 19:02:50.214785 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:02:50.234205 tar[1617]: ./firewall Feb 9 19:02:50.309449 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:02:50.382050 tar[1617]: ./host-device Feb 9 19:02:50.404092 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:02:50.483991 tar[1617]: ./sbr Feb 9 19:02:50.499103 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [instanceID=i-030830550ba5bbc92] Starting association polling Feb 9 19:02:50.527923 sshd_keygen[1638]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:02:50.569145 tar[1617]: ./loopback Feb 9 19:02:50.581764 systemd[1]: Finished prepare-critools.service. Feb 9 19:02:50.594298 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 19:02:50.606555 systemd[1]: Finished sshd-keygen.service. Feb 9 19:02:50.611030 systemd[1]: Starting issuegen.service... Feb 9 19:02:50.613941 systemd[1]: Started sshd@0-172.31.23.81:22-139.178.68.195:53628.service. Feb 9 19:02:50.636624 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:02:50.644111 tar[1617]: ./dhcp Feb 9 19:02:50.636863 systemd[1]: Finished issuegen.service. Feb 9 19:02:50.641171 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:02:50.654123 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:02:50.657772 systemd[1]: Started getty@tty1.service. Feb 9 19:02:50.661190 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:02:50.663198 systemd[1]: Reached target getty.target. 
Feb 9 19:02:50.689534 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] [Association] Launching response handler
Feb 9 19:02:50.781749 tar[1617]: ./ptp
Feb 9 19:02:50.786423 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Feb 9 19:02:50.836372 locksmithd[1675]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:02:50.839738 tar[1617]: ./ipvlan
Feb 9 19:02:50.859722 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 53628 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:02:50.862927 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:02:50.887057 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Feb 9 19:02:50.882143 systemd[1]: Created slice user-500.slice.
Feb 9 19:02:50.884865 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:02:50.900194 tar[1617]: ./bandwidth
Feb 9 19:02:50.910814 systemd-logind[1612]: New session 1 of user core.
Feb 9 19:02:50.919539 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:02:50.925020 systemd[1]: Starting user@500.service...
Feb 9 19:02:50.931785 (systemd)[1818]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:02:50.977304 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:02:50.978024 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Feb 9 19:02:50.978919 systemd[1]: Reached target multi-user.target.
Feb 9 19:02:50.982007 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:02:50.997357 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:02:50.997642 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:02:51.059653 systemd[1818]: Queued start job for default target default.target.
Feb 9 19:02:51.060559 systemd[1818]: Reached target paths.target.
Feb 9 19:02:51.060589 systemd[1818]: Reached target sockets.target.
Feb 9 19:02:51.060608 systemd[1818]: Reached target timers.target.
Feb 9 19:02:51.060626 systemd[1818]: Reached target basic.target.
Feb 9 19:02:51.060750 systemd[1818]: Reached target default.target.
Feb 9 19:02:51.060797 systemd[1818]: Startup finished in 117ms.
Feb 9 19:02:51.060828 systemd[1]: Started user@500.service.
Feb 9 19:02:51.063754 systemd[1]: Started session-1.scope.
Feb 9 19:02:51.065106 systemd[1]: Startup finished in 954ms (kernel) + 14.969s (initrd) + 9.184s (userspace) = 25.108s.
Feb 9 19:02:51.074878 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessageGatewayService] Starting session document processing engine...
Feb 9 19:02:51.171323 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessageGatewayService] [EngineProcessor] Starting
Feb 9 19:02:51.268345 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Feb 9 19:02:51.347877 systemd[1]: Started sshd@1-172.31.23.81:22-139.178.68.195:53636.service.
Feb 9 19:02:51.365958 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-030830550ba5bbc92, requestId: c9551da8-aaf2-4f76-b782-fdf82c872b2b
Feb 9 19:02:51.463313 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [OfflineService] Starting document processing engine...
Feb 9 19:02:51.524919 sshd[1830]: Accepted publickey for core from 139.178.68.195 port 53636 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:02:51.526413 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:02:51.533803 systemd-logind[1612]: New session 2 of user core.
Feb 9 19:02:51.533894 systemd[1]: Started session-2.scope.
Feb 9 19:02:51.560363 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [OfflineService] [EngineProcessor] Starting
Feb 9 19:02:51.657640 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [OfflineService] [EngineProcessor] Initial processing
Feb 9 19:02:51.672794 sshd[1830]: pam_unix(sshd:session): session closed for user core
Feb 9 19:02:51.678119 systemd[1]: sshd@1-172.31.23.81:22-139.178.68.195:53636.service: Deactivated successfully.
Feb 9 19:02:51.679205 systemd[1]: session-2.scope: Deactivated successfully.
Feb 9 19:02:51.679939 systemd-logind[1612]: Session 2 logged out. Waiting for processes to exit.
Feb 9 19:02:51.681422 systemd-logind[1612]: Removed session 2.
Feb 9 19:02:51.700717 systemd[1]: Started sshd@2-172.31.23.81:22-139.178.68.195:53642.service.
Feb 9 19:02:51.755734 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [OfflineService] Starting message polling
Feb 9 19:02:51.853315 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [OfflineService] Starting send replies to MDS
Feb 9 19:02:51.880787 sshd[1836]: Accepted publickey for core from 139.178.68.195 port 53642 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:02:51.885115 sshd[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:02:51.898717 systemd[1]: Started session-3.scope.
Feb 9 19:02:51.899450 systemd-logind[1612]: New session 3 of user core.
Feb 9 19:02:51.951997 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [LongRunningPluginsManager] starting long running plugin manager
Feb 9 19:02:52.026942 sshd[1836]: pam_unix(sshd:session): session closed for user core
Feb 9 19:02:52.031870 systemd[1]: sshd@2-172.31.23.81:22-139.178.68.195:53642.service: Deactivated successfully.
Feb 9 19:02:52.032863 systemd[1]: session-3.scope: Deactivated successfully.
Feb 9 19:02:52.033649 systemd-logind[1612]: Session 3 logged out. Waiting for processes to exit.
Feb 9 19:02:52.034833 systemd-logind[1612]: Removed session 3.
Feb 9 19:02:52.051215 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 9 19:02:52.052373 systemd[1]: Started sshd@3-172.31.23.81:22-139.178.68.195:53652.service.
Feb 9 19:02:52.149841 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Feb 9 19:02:52.219478 sshd[1842]: Accepted publickey for core from 139.178.68.195 port 53652 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:02:52.221188 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:02:52.228755 systemd-logind[1612]: New session 4 of user core.
Feb 9 19:02:52.228846 systemd[1]: Started session-4.scope.
Feb 9 19:02:52.248263 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [MessageGatewayService] listening reply.
Feb 9 19:02:52.347013 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Feb 9 19:02:52.357803 sshd[1842]: pam_unix(sshd:session): session closed for user core
Feb 9 19:02:52.360953 systemd[1]: sshd@3-172.31.23.81:22-139.178.68.195:53652.service: Deactivated successfully.
Feb 9 19:02:52.361968 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 19:02:52.362647 systemd-logind[1612]: Session 4 logged out. Waiting for processes to exit.
Feb 9 19:02:52.363575 systemd-logind[1612]: Removed session 4.
Feb 9 19:02:52.385490 systemd[1]: Started sshd@4-172.31.23.81:22-139.178.68.195:53662.service.
Feb 9 19:02:52.445782 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [StartupProcessor] Executing startup processor tasks
Feb 9 19:02:52.544855 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 9 19:02:52.551101 sshd[1848]: Accepted publickey for core from 139.178.68.195 port 53662 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg
Feb 9 19:02:52.553278 sshd[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:02:52.558782 systemd-logind[1612]: New session 5 of user core.
Feb 9 19:02:52.559332 systemd[1]: Started session-5.scope.
Feb 9 19:02:52.644497 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 9 19:02:52.684637 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 19:02:52.685238 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:02:52.743856 amazon-ssm-agent[1661]: 2024-02-09 19:02:49 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 9 19:02:52.843919 amazon-ssm-agent[1661]: 2024-02-09 19:02:50 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-030830550ba5bbc92?role=subscribe&stream=input
Feb 9 19:02:52.943862 amazon-ssm-agent[1661]: 2024-02-09 19:02:50 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-030830550ba5bbc92?role=subscribe&stream=input
Feb 9 19:02:53.044049 amazon-ssm-agent[1661]: 2024-02-09 19:02:50 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 9 19:02:53.144735 amazon-ssm-agent[1661]: 2024-02-09 19:02:50 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 9 19:02:53.276567 systemd[1]: Reloading.
Feb 9 19:02:53.404356 /usr/lib/systemd/system-generators/torcx-generator[1886]: time="2024-02-09T19:02:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:02:53.404401 /usr/lib/systemd/system-generators/torcx-generator[1886]: time="2024-02-09T19:02:53Z" level=info msg="torcx already run"
Feb 9 19:02:53.553816 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:02:53.553903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:02:53.577447 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:02:53.731204 systemd[1]: Started kubelet.service.
Feb 9 19:02:53.753674 systemd[1]: Starting coreos-metadata.service...
Feb 9 19:02:53.845624 kubelet[1934]: E0209 19:02:53.845068 1934 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:02:53.848252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:02:53.848473 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:02:53.895854 coreos-metadata[1941]: Feb 09 19:02:53.895 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 9 19:02:53.896710 coreos-metadata[1941]: Feb 09 19:02:53.896 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Feb 9 19:02:53.897333 coreos-metadata[1941]: Feb 09 19:02:53.897 INFO Fetch successful
Feb 9 19:02:53.897398 coreos-metadata[1941]: Feb 09 19:02:53.897 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Feb 9 19:02:53.897875 coreos-metadata[1941]: Feb 09 19:02:53.897 INFO Fetch successful
Feb 9 19:02:53.897959 coreos-metadata[1941]: Feb 09 19:02:53.897 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Feb 9 19:02:53.898420 coreos-metadata[1941]: Feb 09 19:02:53.898 INFO Fetch successful
Feb 9 19:02:53.898491 coreos-metadata[1941]: Feb 09 19:02:53.898 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Feb 9 19:02:53.898921 coreos-metadata[1941]: Feb 09 19:02:53.898 INFO Fetch successful
Feb 9 19:02:53.899002 coreos-metadata[1941]: Feb 09 19:02:53.898 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Feb 9 19:02:53.900278 coreos-metadata[1941]: Feb 09 19:02:53.900 INFO Fetch successful
Feb 9 19:02:53.900347 coreos-metadata[1941]: Feb 09 19:02:53.900 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Feb 9 19:02:53.900940 coreos-metadata[1941]: Feb 09 19:02:53.900 INFO Fetch successful
Feb 9 19:02:53.901011 coreos-metadata[1941]: Feb 09 19:02:53.900 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Feb 9 19:02:53.901469 coreos-metadata[1941]: Feb 09 19:02:53.901 INFO Fetch successful
Feb 9 19:02:53.901567 coreos-metadata[1941]: Feb 09 19:02:53.901 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Feb 9 19:02:53.902266 coreos-metadata[1941]: Feb 09 19:02:53.902 INFO Fetch successful
Feb 9 19:02:53.914864 systemd[1]: Finished coreos-metadata.service.
Feb 9 19:02:54.397174 systemd[1]: Stopped kubelet.service.
Feb 9 19:02:54.417637 systemd[1]: Reloading.
Feb 9 19:02:54.521176 /usr/lib/systemd/system-generators/torcx-generator[1998]: time="2024-02-09T19:02:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:02:54.521424 /usr/lib/systemd/system-generators/torcx-generator[1998]: time="2024-02-09T19:02:54Z" level=info msg="torcx already run"
Feb 9 19:02:54.623508 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:02:54.623532 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:02:54.647870 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:02:54.776015 systemd[1]: Started kubelet.service.
Feb 9 19:02:54.857320 kubelet[2051]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:02:54.857836 kubelet[2051]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:02:54.857969 kubelet[2051]: I0209 19:02:54.857939 2051 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:02:54.859941 kubelet[2051]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:02:54.860115 kubelet[2051]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:02:55.295414 kubelet[2051]: I0209 19:02:55.295386 2051 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:02:55.295561 kubelet[2051]: I0209 19:02:55.295551 2051 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:02:55.295916 kubelet[2051]: I0209 19:02:55.295900 2051 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:02:55.298438 kubelet[2051]: I0209 19:02:55.298413 2051 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:02:55.302829 kubelet[2051]: I0209 19:02:55.302793 2051 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:02:55.303069 kubelet[2051]: I0209 19:02:55.303050 2051 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:02:55.303203 kubelet[2051]: I0209 19:02:55.303185 2051 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:02:55.303338 kubelet[2051]: I0209 19:02:55.303218 2051 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:02:55.303338 kubelet[2051]: I0209 19:02:55.303234 2051 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:02:55.303620 kubelet[2051]: I0209 19:02:55.303354 2051 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:02:55.308324 kubelet[2051]: I0209 19:02:55.308298 2051 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:02:55.308324 kubelet[2051]: I0209 19:02:55.308326 2051 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:02:55.308491 kubelet[2051]: I0209 19:02:55.308353 2051 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:02:55.308491 kubelet[2051]: I0209 19:02:55.308371 2051 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:02:55.308831 kubelet[2051]: E0209 19:02:55.308815 2051 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:02:55.309058 kubelet[2051]: E0209 19:02:55.309044 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:02:55.309524 kubelet[2051]: I0209 19:02:55.309506 2051 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:02:55.310167 kubelet[2051]: W0209 19:02:55.310146 2051 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:02:55.310634 kubelet[2051]: I0209 19:02:55.310616 2051 server.go:1186] "Started kubelet"
Feb 9 19:02:55.311657 kubelet[2051]: I0209 19:02:55.311642 2051 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:02:55.312811 kubelet[2051]: I0209 19:02:55.312795 2051 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:02:55.315103 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 19:02:55.315440 kubelet[2051]: I0209 19:02:55.315365 2051 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:02:55.316723 kubelet[2051]: E0209 19:02:55.316505 2051 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:02:55.316806 kubelet[2051]: E0209 19:02:55.316748 2051 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:02:55.321764 kubelet[2051]: I0209 19:02:55.321236 2051 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:02:55.321764 kubelet[2051]: I0209 19:02:55.321333 2051 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:02:55.341169 kubelet[2051]: E0209 19:02:55.341135 2051 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.23.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:02:55.341409 kubelet[2051]: W0209 19:02:55.341388 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:02:55.341506 kubelet[2051]: E0209 19:02:55.341469 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:02:55.341589 kubelet[2051]: W0209 19:02:55.341568 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:02:55.341636 kubelet[2051]: E0209 19:02:55.341598 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:02:55.341759 kubelet[2051]: E0209 19:02:55.341631 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b2471793752bb3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 310588851, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 310588851, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:02:55.345254 kubelet[2051]: W0209 19:02:55.343528 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:02:55.345254 kubelet[2051]: E0209 19:02:55.343565 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:02:55.345254 kubelet[2051]: E0209 19:02:55.344812 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b2471793d2e2c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 316730565, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 316730565, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:02:55.376601 kubelet[2051]: I0209 19:02:55.376574 2051 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:02:55.376601 kubelet[2051]: I0209 19:02:55.376599 2051 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:02:55.376793 kubelet[2051]: I0209 19:02:55.376616 2051 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:02:55.379073 kubelet[2051]: I0209 19:02:55.379044 2051 policy_none.go:49] "None policy: Start"
Feb 9 19:02:55.383849 kubelet[2051]: E0209 19:02:55.379404 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750bfdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375310815, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375310815, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:02:55.384846 kubelet[2051]: I0209 19:02:55.384828 2051 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:02:55.384992 kubelet[2051]: I0209 19:02:55.384982 2051 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:02:55.387526 kubelet[2051]: E0209 19:02:55.387419 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750d161", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375315297, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375315297, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:02:55.395178 kubelet[2051]: E0209 19:02:55.394962 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750dba3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375317923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375317923, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:02:55.396884 systemd[1]: Created slice kubepods.slice.
Feb 9 19:02:55.415144 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 19:02:55.423633 kubelet[2051]: I0209 19:02:55.422414 2051 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.81"
Feb 9 19:02:55.423432 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 19:02:55.424417 kubelet[2051]: E0209 19:02:55.424303 2051 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.81"
Feb 9 19:02:55.425402 kubelet[2051]: E0209 19:02:55.425275 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750bfdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375310815, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 422367613, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750bfdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:02:55.428457 kubelet[2051]: E0209 19:02:55.428261 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750d161", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375315297, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 422373997, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750d161" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:55.430310 kubelet[2051]: E0209 19:02:55.430101 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750dba3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375317923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 422378695, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750dba3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:55.433316 kubelet[2051]: I0209 19:02:55.433235 2051 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:02:55.433884 kubelet[2051]: I0209 19:02:55.433717 2051 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:02:55.434799 kubelet[2051]: E0209 19:02:55.434776 2051 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.81\" not found" Feb 9 19:02:55.438875 kubelet[2051]: E0209 19:02:55.438796 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179af2cf0d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 436263181, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 436263181, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:55.543760 kubelet[2051]: E0209 19:02:55.543726 2051 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.23.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:02:55.581312 kubelet[2051]: I0209 19:02:55.576932 2051 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:02:55.617049 kubelet[2051]: I0209 19:02:55.617019 2051 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:02:55.617049 kubelet[2051]: I0209 19:02:55.617044 2051 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:02:55.617049 kubelet[2051]: I0209 19:02:55.617069 2051 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:02:55.617274 kubelet[2051]: E0209 19:02:55.617116 2051 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:02:55.620313 kubelet[2051]: W0209 19:02:55.620284 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:02:55.620561 kubelet[2051]: E0209 19:02:55.620338 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:02:55.626106 kubelet[2051]: I0209 19:02:55.626078 2051 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.81" Feb 9 19:02:55.628450 kubelet[2051]: E0209 19:02:55.628426 2051 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.81" Feb 9 19:02:55.628890 kubelet[2051]: E0209 19:02:55.628811 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750bfdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375310815, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 626046477, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750bfdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:55.630454 kubelet[2051]: E0209 19:02:55.630190 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750d161", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375315297, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 626051896, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750d161" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:55.713419 kubelet[2051]: E0209 19:02:55.713324 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750dba3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375317923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 626054757, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750dba3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:02:55.720488 amazon-ssm-agent[1661]: 2024-02-09 19:02:55 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Feb 9 19:02:55.946172 kubelet[2051]: E0209 19:02:55.946067 2051 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.23.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:02:56.030047 kubelet[2051]: I0209 19:02:56.030020 2051 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.81" Feb 9 19:02:56.031932 kubelet[2051]: E0209 19:02:56.031905 2051 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.81" Feb 9 19:02:56.033653 kubelet[2051]: E0209 19:02:56.033559 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750bfdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375310815, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 56, 29924523, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750bfdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:02:56.113142 kubelet[2051]: E0209 19:02:56.113047 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750d161", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375315297, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 56, 29935854, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750d161" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:56.264599 kubelet[2051]: W0209 19:02:56.264482 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:02:56.264599 kubelet[2051]: E0209 19:02:56.264519 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:02:56.310089 kubelet[2051]: E0209 19:02:56.310050 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:02:56.313124 kubelet[2051]: E0209 19:02:56.313021 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750dba3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375317923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 56, 29983962, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750dba3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:02:56.748605 kubelet[2051]: E0209 19:02:56.748505 2051 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.23.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:02:56.821467 kubelet[2051]: W0209 19:02:56.821437 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:02:56.821467 kubelet[2051]: E0209 19:02:56.821474 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:02:56.823297 kubelet[2051]: W0209 19:02:56.823218 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:02:56.823297 kubelet[2051]: E0209 19:02:56.823298 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:02:56.833475 kubelet[2051]: I0209 19:02:56.833438 2051 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.81" Feb 9 19:02:56.834814 kubelet[2051]: 
E0209 19:02:56.834780 2051 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.81" Feb 9 19:02:56.835883 kubelet[2051]: E0209 19:02:56.835795 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750bfdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375310815, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 56, 833380083, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750bfdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:56.838633 kubelet[2051]: E0209 19:02:56.838437 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750d161", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375315297, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 56, 833386174, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750d161" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:56.847775 kubelet[2051]: W0209 19:02:56.847688 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:02:56.847918 kubelet[2051]: E0209 19:02:56.847792 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:02:56.913771 kubelet[2051]: E0209 19:02:56.913646 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750dba3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375317923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 56, 833388936, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"172.31.23.81.17b247179750dba3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:02:57.310975 kubelet[2051]: E0209 19:02:57.310920 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:02:58.311943 kubelet[2051]: E0209 19:02:58.311888 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:02:58.351184 kubelet[2051]: E0209 19:02:58.351140 2051 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.23.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:02:58.436525 kubelet[2051]: I0209 19:02:58.436488 2051 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.81" Feb 9 19:02:58.439890 kubelet[2051]: E0209 19:02:58.439857 2051 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.81" Feb 9 19:02:58.440153 kubelet[2051]: E0209 19:02:58.440075 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750bfdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", 
APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375310815, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 58, 436437727, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750bfdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:02:58.441291 kubelet[2051]: E0209 19:02:58.441221 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750d161", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375315297, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 58, 436455093, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750d161" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:02:58.442336 kubelet[2051]: E0209 19:02:58.442272 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750dba3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375317923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 58, 436458889, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750dba3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:02:59.312286 kubelet[2051]: E0209 19:02:59.312235 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:02:59.328428 kubelet[2051]: W0209 19:02:59.328395 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:02:59.328428 kubelet[2051]: E0209 19:02:59.328430 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:02:59.376820 kubelet[2051]: W0209 19:02:59.376787 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:02:59.376820 kubelet[2051]: E0209 19:02:59.376823 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:02:59.564410 kubelet[2051]: W0209 19:02:59.564309 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:02:59.564410 kubelet[2051]: E0209 19:02:59.564346 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" 
cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:02:59.944108 kubelet[2051]: W0209 19:02:59.944008 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:02:59.944108 kubelet[2051]: E0209 19:02:59.944044 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:03:00.313027 kubelet[2051]: E0209 19:03:00.312977 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:01.313969 kubelet[2051]: E0209 19:03:01.313920 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:01.554550 kubelet[2051]: E0209 19:03:01.554504 2051 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.23.81" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 19:03:01.641246 kubelet[2051]: I0209 19:03:01.641131 2051 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.81" Feb 9 19:03:01.643966 kubelet[2051]: E0209 19:03:01.643913 2051 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.81" Feb 9 19:03:01.643966 kubelet[2051]: E0209 19:03:01.643859 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750bfdf", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.81 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375310815, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 1, 641073740, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750bfdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:03:01.645359 kubelet[2051]: E0209 19:03:01.645269 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750d161", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.81 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375315297, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 1, 641091719, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750d161" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:03:01.646636 kubelet[2051]: E0209 19:03:01.646538 2051 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.81.17b247179750dba3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.81", UID:"172.31.23.81", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.81 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.81"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 55, 375317923, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 1, 641095551, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.81.17b247179750dba3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:03:02.314116 kubelet[2051]: E0209 19:03:02.314059 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:03.214214 kubelet[2051]: W0209 19:03:03.214168 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:03:03.214214 kubelet[2051]: E0209 19:03:03.214207 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:03:03.282119 kubelet[2051]: W0209 19:03:03.282088 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:03:03.282119 kubelet[2051]: E0209 19:03:03.282122 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:03:03.314714 kubelet[2051]: E0209 19:03:03.314624 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:04.262166 kubelet[2051]: W0209 19:03:04.262126 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:03:04.262166 kubelet[2051]: E0209 19:03:04.262162 2051 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:03:04.314865 kubelet[2051]: E0209 19:03:04.314809 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:04.991774 kubelet[2051]: W0209 19:03:04.991735 2051 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:03:04.991774 kubelet[2051]: E0209 19:03:04.991774 2051 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.81" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:03:05.297917 kubelet[2051]: I0209 19:03:05.297741 2051 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:03:05.315266 kubelet[2051]: E0209 19:03:05.315171 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:05.435049 kubelet[2051]: E0209 19:03:05.435015 2051 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.81\" not found" Feb 9 19:03:05.707145 kubelet[2051]: E0209 19:03:05.707107 2051 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.23.81" not found Feb 9 19:03:06.315387 kubelet[2051]: E0209 19:03:06.315343 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:03:06.744695 kubelet[2051]: E0209 19:03:06.744484 2051 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.23.81" not found Feb 9 19:03:07.316238 kubelet[2051]: E0209 19:03:07.316196 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:07.960291 kubelet[2051]: E0209 19:03:07.960257 2051 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.23.81\" not found" node="172.31.23.81" Feb 9 19:03:08.045926 kubelet[2051]: I0209 19:03:08.045893 2051 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.81" Feb 9 19:03:08.146430 kubelet[2051]: I0209 19:03:08.146396 2051 kubelet_node_status.go:73] "Successfully registered node" node="172.31.23.81" Feb 9 19:03:08.161390 kubelet[2051]: E0209 19:03:08.161346 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:08.262315 kubelet[2051]: E0209 19:03:08.262175 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:08.293578 sudo[1851]: pam_unix(sudo:session): session closed for user root Feb 9 19:03:08.318909 kubelet[2051]: E0209 19:03:08.317204 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:08.317694 sshd[1848]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:08.322643 systemd[1]: sshd@4-172.31.23.81:22-139.178.68.195:53662.service: Deactivated successfully. Feb 9 19:03:08.323990 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:03:08.324967 systemd-logind[1612]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:03:08.326044 systemd-logind[1612]: Removed session 5. 
Feb 9 19:03:08.362687 kubelet[2051]: E0209 19:03:08.362627 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:08.463494 kubelet[2051]: E0209 19:03:08.463445 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:08.563967 kubelet[2051]: E0209 19:03:08.563922 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:08.665063 kubelet[2051]: E0209 19:03:08.665013 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:08.765414 kubelet[2051]: E0209 19:03:08.765372 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:08.866220 kubelet[2051]: E0209 19:03:08.866081 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:08.966806 kubelet[2051]: E0209 19:03:08.966763 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.067456 kubelet[2051]: E0209 19:03:09.067408 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.168243 kubelet[2051]: E0209 19:03:09.168134 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.269129 kubelet[2051]: E0209 19:03:09.269016 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.317962 kubelet[2051]: E0209 19:03:09.317911 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:09.369282 kubelet[2051]: E0209 19:03:09.369234 2051 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.470247 kubelet[2051]: E0209 19:03:09.470130 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.571113 kubelet[2051]: E0209 19:03:09.571069 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.672225 kubelet[2051]: E0209 19:03:09.672181 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.772384 kubelet[2051]: E0209 19:03:09.772267 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.873400 kubelet[2051]: E0209 19:03:09.873358 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:09.974058 kubelet[2051]: E0209 19:03:09.974017 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.074851 kubelet[2051]: E0209 19:03:10.074729 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.176014 kubelet[2051]: E0209 19:03:10.175963 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.276863 kubelet[2051]: E0209 19:03:10.276820 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.318496 kubelet[2051]: E0209 19:03:10.318445 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:10.378087 kubelet[2051]: E0209 19:03:10.377908 2051 kubelet_node_status.go:458] "Error getting the current node from 
lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.478782 kubelet[2051]: E0209 19:03:10.478733 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.579535 kubelet[2051]: E0209 19:03:10.579488 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.680706 kubelet[2051]: E0209 19:03:10.680584 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.781618 kubelet[2051]: E0209 19:03:10.781571 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.881715 kubelet[2051]: E0209 19:03:10.881655 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:10.982440 kubelet[2051]: E0209 19:03:10.982196 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.082750 kubelet[2051]: E0209 19:03:11.082709 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.183340 kubelet[2051]: E0209 19:03:11.183298 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.283994 kubelet[2051]: E0209 19:03:11.283884 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.319526 kubelet[2051]: E0209 19:03:11.319484 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:11.384052 kubelet[2051]: E0209 19:03:11.384007 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.484908 
kubelet[2051]: E0209 19:03:11.484867 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.585584 kubelet[2051]: E0209 19:03:11.585540 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.686543 kubelet[2051]: E0209 19:03:11.686500 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.787172 kubelet[2051]: E0209 19:03:11.787127 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.888104 kubelet[2051]: E0209 19:03:11.887991 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:11.989405 kubelet[2051]: E0209 19:03:11.989240 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.089770 kubelet[2051]: E0209 19:03:12.089724 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.190492 kubelet[2051]: E0209 19:03:12.190379 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.291177 kubelet[2051]: E0209 19:03:12.291132 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.319807 kubelet[2051]: E0209 19:03:12.319751 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:12.391381 kubelet[2051]: E0209 19:03:12.391341 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.492322 kubelet[2051]: E0209 19:03:12.492204 2051 kubelet_node_status.go:458] 
"Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.593048 kubelet[2051]: E0209 19:03:12.592991 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.693219 kubelet[2051]: E0209 19:03:12.693171 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.793977 kubelet[2051]: E0209 19:03:12.793871 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.894743 kubelet[2051]: E0209 19:03:12.894701 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:12.995940 kubelet[2051]: E0209 19:03:12.995801 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:13.096581 kubelet[2051]: E0209 19:03:13.096529 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:13.197635 kubelet[2051]: E0209 19:03:13.197583 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:13.298257 kubelet[2051]: E0209 19:03:13.298207 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:13.320729 kubelet[2051]: E0209 19:03:13.320685 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:13.399345 kubelet[2051]: E0209 19:03:13.399231 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:13.500066 kubelet[2051]: E0209 19:03:13.500019 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"172.31.23.81\" not found" Feb 9 19:03:13.600752 kubelet[2051]: E0209 19:03:13.600707 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:13.700961 kubelet[2051]: E0209 19:03:13.700842 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:13.801616 kubelet[2051]: E0209 19:03:13.801570 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:13.902254 kubelet[2051]: E0209 19:03:13.902207 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.002940 kubelet[2051]: E0209 19:03:14.002833 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.103540 kubelet[2051]: E0209 19:03:14.103495 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.204197 kubelet[2051]: E0209 19:03:14.204145 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.305191 kubelet[2051]: E0209 19:03:14.305146 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.321578 kubelet[2051]: E0209 19:03:14.321539 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:14.405361 kubelet[2051]: E0209 19:03:14.405320 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.506225 kubelet[2051]: E0209 19:03:14.506179 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.607009 kubelet[2051]: 
E0209 19:03:14.606893 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.707757 kubelet[2051]: E0209 19:03:14.707649 2051 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.81\" not found" Feb 9 19:03:14.808927 kubelet[2051]: I0209 19:03:14.808901 2051 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:03:14.809392 env[1623]: time="2024-02-09T19:03:14.809348059Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:03:14.809827 kubelet[2051]: I0209 19:03:14.809555 2051 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:03:15.308528 kubelet[2051]: E0209 19:03:15.308478 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:15.320915 kubelet[2051]: I0209 19:03:15.320629 2051 apiserver.go:52] "Watching apiserver" Feb 9 19:03:15.321715 kubelet[2051]: E0209 19:03:15.321695 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:15.328111 kubelet[2051]: I0209 19:03:15.328075 2051 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:15.328515 kubelet[2051]: I0209 19:03:15.328160 2051 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:03:15.337963 systemd[1]: Created slice kubepods-burstable-pod46d72fcf_c8be_4e68_9f39_c8734b29680f.slice. Feb 9 19:03:15.350160 systemd[1]: Created slice kubepods-besteffort-pod1ce1627d_fb08_476a_b2df_183aea6b628f.slice. 
Feb 9 19:03:15.422190 kubelet[2051]: I0209 19:03:15.422158 2051 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:03:15.455037 kubelet[2051]: I0209 19:03:15.454946 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ce1627d-fb08-476a-b2df-183aea6b628f-xtables-lock\") pod \"kube-proxy-tz885\" (UID: \"1ce1627d-fb08-476a-b2df-183aea6b628f\") " pod="kube-system/kube-proxy-tz885" Feb 9 19:03:15.455331 kubelet[2051]: I0209 19:03:15.455110 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ce1627d-fb08-476a-b2df-183aea6b628f-lib-modules\") pod \"kube-proxy-tz885\" (UID: \"1ce1627d-fb08-476a-b2df-183aea6b628f\") " pod="kube-system/kube-proxy-tz885" Feb 9 19:03:15.455331 kubelet[2051]: I0209 19:03:15.455159 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-hostproc\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455331 kubelet[2051]: I0209 19:03:15.455206 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cni-path\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455331 kubelet[2051]: I0209 19:03:15.455302 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-etc-cni-netd\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " 
pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455546 kubelet[2051]: I0209 19:03:15.455338 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46d72fcf-c8be-4e68-9f39-c8734b29680f-clustermesh-secrets\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455546 kubelet[2051]: I0209 19:03:15.455369 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-host-proc-sys-net\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455546 kubelet[2051]: I0209 19:03:15.455406 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46d72fcf-c8be-4e68-9f39-c8734b29680f-hubble-tls\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455546 kubelet[2051]: I0209 19:03:15.455437 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ce1627d-fb08-476a-b2df-183aea6b628f-kube-proxy\") pod \"kube-proxy-tz885\" (UID: \"1ce1627d-fb08-476a-b2df-183aea6b628f\") " pod="kube-system/kube-proxy-tz885" Feb 9 19:03:15.455546 kubelet[2051]: I0209 19:03:15.455469 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ssg\" (UniqueName: \"kubernetes.io/projected/1ce1627d-fb08-476a-b2df-183aea6b628f-kube-api-access-h2ssg\") pod \"kube-proxy-tz885\" (UID: \"1ce1627d-fb08-476a-b2df-183aea6b628f\") " pod="kube-system/kube-proxy-tz885" Feb 9 19:03:15.455789 kubelet[2051]: I0209 19:03:15.455501 2051 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-run\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455789 kubelet[2051]: I0209 19:03:15.455531 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-bpf-maps\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455789 kubelet[2051]: I0209 19:03:15.455560 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-xtables-lock\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455789 kubelet[2051]: I0209 19:03:15.455592 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-host-proc-sys-kernel\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455789 kubelet[2051]: I0209 19:03:15.455634 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96v8\" (UniqueName: \"kubernetes.io/projected/46d72fcf-c8be-4e68-9f39-c8734b29680f-kube-api-access-g96v8\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.455789 kubelet[2051]: I0209 19:03:15.455680 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-lib-modules\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.456033 kubelet[2051]: I0209 19:03:15.455725 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-config-path\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.456033 kubelet[2051]: I0209 19:03:15.455775 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-cgroup\") pod \"cilium-qqbmq\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " pod="kube-system/cilium-qqbmq" Feb 9 19:03:15.456033 kubelet[2051]: I0209 19:03:15.455790 2051 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:03:15.659722 env[1623]: time="2024-02-09T19:03:15.658593273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tz885,Uid:1ce1627d-fb08-476a-b2df-183aea6b628f,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:15.949347 env[1623]: time="2024-02-09T19:03:15.949224178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qqbmq,Uid:46d72fcf-c8be-4e68-9f39-c8734b29680f,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:16.237207 env[1623]: time="2024-02-09T19:03:16.236645556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:16.238217 env[1623]: time="2024-02-09T19:03:16.238181696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:16.248008 env[1623]: 
time="2024-02-09T19:03:16.247962165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:16.249068 env[1623]: time="2024-02-09T19:03:16.249032067Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:16.250381 env[1623]: time="2024-02-09T19:03:16.250351513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:16.253747 env[1623]: time="2024-02-09T19:03:16.253716734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:16.254722 env[1623]: time="2024-02-09T19:03:16.254693586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:16.257552 env[1623]: time="2024-02-09T19:03:16.257523273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:16.288583 env[1623]: time="2024-02-09T19:03:16.288510362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:16.288892 env[1623]: time="2024-02-09T19:03:16.288595801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:16.288892 env[1623]: time="2024-02-09T19:03:16.288769055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:16.289000 env[1623]: time="2024-02-09T19:03:16.288935045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54c1f86632c102e66199238804161d24107dbfbd9260a282ec55652e355f2e92 pid=2142 runtime=io.containerd.runc.v2 Feb 9 19:03:16.291612 env[1623]: time="2024-02-09T19:03:16.291019078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:16.291974 env[1623]: time="2024-02-09T19:03:16.291925457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:16.291974 env[1623]: time="2024-02-09T19:03:16.291950900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:16.293009 env[1623]: time="2024-02-09T19:03:16.292713188Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9 pid=2153 runtime=io.containerd.runc.v2 Feb 9 19:03:16.324868 systemd[1]: Started cri-containerd-54c1f86632c102e66199238804161d24107dbfbd9260a282ec55652e355f2e92.scope. Feb 9 19:03:16.334698 kubelet[2051]: E0209 19:03:16.328659 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:16.342164 systemd[1]: Started cri-containerd-6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9.scope. 
Feb 9 19:03:16.411884 env[1623]: time="2024-02-09T19:03:16.411839519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tz885,Uid:1ce1627d-fb08-476a-b2df-183aea6b628f,Namespace:kube-system,Attempt:0,} returns sandbox id \"54c1f86632c102e66199238804161d24107dbfbd9260a282ec55652e355f2e92\"" Feb 9 19:03:16.416751 env[1623]: time="2024-02-09T19:03:16.415001203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qqbmq,Uid:46d72fcf-c8be-4e68-9f39-c8734b29680f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\"" Feb 9 19:03:16.417876 env[1623]: time="2024-02-09T19:03:16.417828926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:03:16.573094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294810742.mount: Deactivated successfully. Feb 9 19:03:17.328871 kubelet[2051]: E0209 19:03:17.328795 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:17.647183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579504389.mount: Deactivated successfully. 
Feb 9 19:03:18.329175 kubelet[2051]: E0209 19:03:18.329108 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:18.348090 env[1623]: time="2024-02-09T19:03:18.348045416Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:18.350895 env[1623]: time="2024-02-09T19:03:18.350859109Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:18.353210 env[1623]: time="2024-02-09T19:03:18.353125415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:18.355971 env[1623]: time="2024-02-09T19:03:18.355931007Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:18.356808 env[1623]: time="2024-02-09T19:03:18.356773174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:03:18.360065 env[1623]: time="2024-02-09T19:03:18.359100124Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:03:18.360440 env[1623]: time="2024-02-09T19:03:18.360403693Z" level=info msg="CreateContainer within sandbox \"54c1f86632c102e66199238804161d24107dbfbd9260a282ec55652e355f2e92\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:03:18.380868 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2838428986.mount: Deactivated successfully. Feb 9 19:03:18.391446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994886470.mount: Deactivated successfully. Feb 9 19:03:18.399420 env[1623]: time="2024-02-09T19:03:18.399369418Z" level=info msg="CreateContainer within sandbox \"54c1f86632c102e66199238804161d24107dbfbd9260a282ec55652e355f2e92\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f03deb72dbe6bdd0935e58e6ed883cd262f9b8b8be0f869305a2fc771dd99eac\"" Feb 9 19:03:18.400605 env[1623]: time="2024-02-09T19:03:18.400576443Z" level=info msg="StartContainer for \"f03deb72dbe6bdd0935e58e6ed883cd262f9b8b8be0f869305a2fc771dd99eac\"" Feb 9 19:03:18.438409 systemd[1]: Started cri-containerd-f03deb72dbe6bdd0935e58e6ed883cd262f9b8b8be0f869305a2fc771dd99eac.scope. Feb 9 19:03:18.507351 env[1623]: time="2024-02-09T19:03:18.507288967Z" level=info msg="StartContainer for \"f03deb72dbe6bdd0935e58e6ed883cd262f9b8b8be0f869305a2fc771dd99eac\" returns successfully" Feb 9 19:03:18.698321 kubelet[2051]: I0209 19:03:18.698216 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tz885" podStartSLOduration=-9.22337202615662e+09 pod.CreationTimestamp="2024-02-09 19:03:08 +0000 UTC" firstStartedPulling="2024-02-09 19:03:16.416600218 +0000 UTC m=+21.632171519" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:18.697989281 +0000 UTC m=+23.913560604" watchObservedRunningTime="2024-02-09 19:03:18.698156221 +0000 UTC m=+23.913727538" Feb 9 19:03:19.331210 kubelet[2051]: E0209 19:03:19.331166 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:19.415285 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 9 19:03:19.705566 env[1623]: time="2024-02-09T19:03:19.705348812Z" level=error msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" failed" error="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn03.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240209%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240209T190319Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=ec5ed92a67daca84c69ebe4da039b29f7aad9633e0d8c0f14bae9d0bd3421ff7&cf_sign=rf9qKvjJvwnm0MMnr4fEwbhDytM8%2FO%2FARfw5YhGnAZfBbGO%2BT79qDdWqJA3g0qGdaKrEuE68FhdrfqD8EOqbIt2shHtOO1G7dTAGKXAznoK9DWJavveONWcy4B5BTZv9EM0gQCcShh1lD3RxtWwY8XH4fBnH3Bk4IQV4WNksaybp5gBI41%2BMRopQDxe1aSVf15fXu1UvSJHtox6EB9mGPR9gVM32v0QgIO0Y2WkfS4KQOy%2BXEUXmpqrp0nnQo%2FFsw1sLvPc46Uh5ypYt%2FdqcG6%2BFUZKUcG6jbggND1E%2BchY2Kt9yYIF%2Blj6weuhGu5ElHoDM%2FubvW6I5MIjfQYJN3w%3D%3D&cf_expiry=1707505999&region=us-east-1&namespace=cilium\": dial tcp: lookup cdn03.quay.io: no such host" Feb 9 19:03:19.706326 kubelet[2051]: E0209 19:03:19.706260 2051 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn03.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240209%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240209T190319Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=ec5ed92a67daca84c69ebe4da039b29f7aad9633e0d8c0f14bae9d0bd3421ff7&cf_sign=rf9qKvjJvwnm0MMnr4fEwbhDytM8%2FO%2FARfw5YhGnAZfBbGO%2BT79qDdWqJA3g0qGdaKrEuE68FhdrfqD8EOqbIt2shHtOO1G7dTAGKXAznoK9DWJavveONWcy4B5BTZv9EM0gQCcShh1lD3RxtWwY8XH4fBnH3Bk4IQV4WNksaybp5gBI41%2BMRopQDxe1aSVf15fXu1UvSJHtox6EB9mGPR9gVM32v0QgIO0Y2WkfS4KQOy%2BXEUXmpqrp0nnQo%2FFsw1sLvPc46Uh5ypYt%2FdqcG6%2BFUZKUcG6jbggND1E%2BchY2Kt9yYIF%2Blj6weuhGu5ElHoDM%2FubvW6I5MIjfQYJN3w%3D%3D&cf_expiry=1707505999&region=us-east-1&namespace=cilium\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Feb 9 19:03:19.706494 kubelet[2051]: E0209 19:03:19.706342 2051 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn03.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240209%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240209T190319Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=ec5ed92a67daca84c69ebe4da039b29f7aad9633e0d8c0f14bae9d0bd3421ff7&cf_sign=rf9qKvjJvwnm0MMnr4fEwbhDytM8%2FO%2FARfw5YhGnAZfBbGO%2BT79qDdWqJA3g0qGdaKrEuE68FhdrfqD8EOqbIt2shHtOO1G7dTAGKXAznoK9DWJavveONWcy4B5BTZv9EM0gQCcShh1lD3RxtWwY8XH4fBnH3Bk4IQV4WNksaybp5gBI41%2BMRopQDxe1aSVf15fXu1UvSJHtox6EB9mGPR9gVM32v0QgIO0Y2WkfS4KQOy%2BXEUXmpqrp0nnQo%2FFsw1sLvPc46Uh5ypYt%2FdqcG6%2BFUZKUcG6jbggND1E%2BchY2Kt9yYIF%2Blj6weuhGu5ElHoDM%2FubvW6I5MIjfQYJN3w%3D%3D&cf_expiry=1707505999&region=us-east-1&namespace=cilium\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Feb 9 19:03:19.706657 kubelet[2051]: E0209 19:03:19.706619 2051 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:03:19.706657 kubelet[2051]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:03:19.706657 kubelet[2051]: rm /hostbin/cilium-mount Feb 9 19:03:19.710847 kubelet[2051]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-g96v8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:unconfined_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-qqbmq_kube-system(46d72fcf-c8be-4e68-9f39-c8734b29680f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
"https://cdn03.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240209%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240209T190319Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=ec5ed92a67daca84c69ebe4da039b29f7aad9633e0d8c0f14bae9d0bd3421ff7&cf_sign=rf9qKvjJvwnm0MMnr4fEwbhDytM8%2FO%2FARfw5YhGnAZfBbGO%2BT79qDdWqJA3g0qGdaKrEuE68FhdrfqD8EOqbIt2shHtOO1G7dTAGKXAznoK9DWJavveONWcy4B5BTZv9EM0gQCcShh1lD3RxtWwY8XH4fBnH3Bk4IQV4WNksaybp5gBI41%2BMRopQDxe1aSVf15fXu1UvSJHtox6EB9mGPR9gVM32v0QgIO0Y2WkfS4KQOy%2BXEUXmpqrp0nnQo%2FFsw1sLvPc46Uh5ypYt%2FdqcG6%2BFUZKUcG6jbggND1E%2BchY2Kt9yYIF%2Blj6weuhGu5ElHoDM%2FubvW6I5MIjfQYJN3w%3D%3D&cf_expiry=1707505999&region=us-east-1&namespace=cilium": dial tcp: lookup cdn03.quay.io: no such host Feb 9 19:03:19.710847 kubelet[2051]: E0209 19:03:19.706863 2051 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\\\"https://cdn03.quay.io/quayio-production-s3/sha256/c4/c46932a78ea5a22a2cf94df8f536a022d6641fc90a5cc3517c5aa4c45db585d3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240209%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240209T190319Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=ec5ed92a67daca84c69ebe4da039b29f7aad9633e0d8c0f14bae9d0bd3421ff7&cf_sign=rf9qKvjJvwnm0MMnr4fEwbhDytM8%2FO%2FARfw5YhGnAZfBbGO%2BT79qDdWqJA3g0qGdaKrEuE68FhdrfqD8EOqbIt2shHtOO1G7dTAGKXAznoK9DWJavveONWcy4B5BTZv9EM0gQCcShh1lD3RxtWwY8XH4fBnH3Bk4IQV4WNksaybp5gBI41%2BMRopQDxe1aSVf15fXu1UvSJHtox6EB9mGPR9gVM32v0QgIO0Y2WkfS4KQOy%2BXEUXmpqrp0nnQo%2FFsw1sLvPc46Uh5ypYt%2FdqcG6%2BFUZKUcG6jbggND1E%2BchY2Kt9yYIF%2Blj6weuhGu5ElHoDM%2FubvW6I5MIjfQYJN3w%3D%3D&cf_expiry=1707505999&region=us-east-1&namespace=cilium\\\": dial tcp: lookup cdn03.quay.io: no such host\"" pod="kube-system/cilium-qqbmq" podUID=46d72fcf-c8be-4e68-9f39-c8734b29680f Feb 9 19:03:20.331818 kubelet[2051]: E0209 19:03:20.331773 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:20.682051 kubelet[2051]: E0209 19:03:20.681889 2051 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"\"" pod="kube-system/cilium-qqbmq" podUID=46d72fcf-c8be-4e68-9f39-c8734b29680f Feb 9 19:03:21.332652 kubelet[2051]: E0209 19:03:21.332614 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:22.333326 kubelet[2051]: E0209 19:03:22.333282 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:23.334482 kubelet[2051]: E0209 19:03:23.334438 2051 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:24.336838 kubelet[2051]: E0209 19:03:24.336789 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:25.337595 kubelet[2051]: E0209 19:03:25.337543 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:25.747874 amazon-ssm-agent[1661]: 2024-02-09 19:03:25 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:03:26.338324 kubelet[2051]: E0209 19:03:26.338277 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:27.338728 kubelet[2051]: E0209 19:03:27.338681 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:28.338968 kubelet[2051]: E0209 19:03:28.338924 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:29.340115 kubelet[2051]: E0209 19:03:29.340075 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:30.340807 kubelet[2051]: E0209 19:03:30.340769 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:31.341535 kubelet[2051]: E0209 19:03:31.341492 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:32.341690 kubelet[2051]: E0209 19:03:32.341629 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:33.342770 kubelet[2051]: E0209 19:03:33.342721 2051 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:33.971692 update_engine[1613]: I0209 19:03:33.971613 1613 update_attempter.cc:509] Updating boot flags... Feb 9 19:03:34.343123 kubelet[2051]: E0209 19:03:34.343068 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:34.620054 env[1623]: time="2024-02-09T19:03:34.619505782Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:03:35.308594 kubelet[2051]: E0209 19:03:35.308522 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:35.344121 kubelet[2051]: E0209 19:03:35.344085 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:36.345254 kubelet[2051]: E0209 19:03:36.345215 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:37.346510 kubelet[2051]: E0209 19:03:37.346246 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:38.347347 kubelet[2051]: E0209 19:03:38.347254 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:39.348366 kubelet[2051]: E0209 19:03:39.348281 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:40.349006 kubelet[2051]: E0209 19:03:40.348904 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:41.350019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount850808789.mount: Deactivated successfully. 
Feb 9 19:03:41.350836 kubelet[2051]: E0209 19:03:41.350279 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:42.350559 kubelet[2051]: E0209 19:03:42.350524 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:43.351384 kubelet[2051]: E0209 19:03:43.351284 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:44.351867 kubelet[2051]: E0209 19:03:44.351829 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:44.925752 env[1623]: time="2024-02-09T19:03:44.925702690Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:44.928709 env[1623]: time="2024-02-09T19:03:44.928650170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:44.932007 env[1623]: time="2024-02-09T19:03:44.931965111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:44.932888 env[1623]: time="2024-02-09T19:03:44.932852756Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:03:44.935096 env[1623]: time="2024-02-09T19:03:44.935062391Z" level=info 
msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:03:44.949149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239339181.mount: Deactivated successfully. Feb 9 19:03:44.956983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177094106.mount: Deactivated successfully. Feb 9 19:03:44.967181 env[1623]: time="2024-02-09T19:03:44.966853445Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\"" Feb 9 19:03:44.968634 env[1623]: time="2024-02-09T19:03:44.968593409Z" level=info msg="StartContainer for \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\"" Feb 9 19:03:45.007794 systemd[1]: Started cri-containerd-87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76.scope. Feb 9 19:03:45.063820 env[1623]: time="2024-02-09T19:03:45.063779253Z" level=info msg="StartContainer for \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\" returns successfully" Feb 9 19:03:45.071343 systemd[1]: cri-containerd-87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76.scope: Deactivated successfully. 
Feb 9 19:03:45.352755 kubelet[2051]: E0209 19:03:45.352691 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:45.589298 env[1623]: time="2024-02-09T19:03:45.589226703Z" level=info msg="shim disconnected" id=87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76 Feb 9 19:03:45.589298 env[1623]: time="2024-02-09T19:03:45.589298418Z" level=warning msg="cleaning up after shim disconnected" id=87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76 namespace=k8s.io Feb 9 19:03:45.589598 env[1623]: time="2024-02-09T19:03:45.589311016Z" level=info msg="cleaning up dead shim" Feb 9 19:03:45.599642 env[1623]: time="2024-02-09T19:03:45.599596876Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:03:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2620 runtime=io.containerd.runc.v2\n" Feb 9 19:03:45.761439 env[1623]: time="2024-02-09T19:03:45.761310422Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:03:45.790897 env[1623]: time="2024-02-09T19:03:45.790842164Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\"" Feb 9 19:03:45.793014 env[1623]: time="2024-02-09T19:03:45.792984175Z" level=info msg="StartContainer for \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\"" Feb 9 19:03:45.870300 systemd[1]: Started cri-containerd-f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798.scope. 
Feb 9 19:03:45.909742 env[1623]: time="2024-02-09T19:03:45.909666169Z" level=info msg="StartContainer for \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\" returns successfully" Feb 9 19:03:45.922117 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:03:45.922902 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:03:45.923102 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:03:45.924962 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:03:45.936570 systemd[1]: cri-containerd-f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798.scope: Deactivated successfully. Feb 9 19:03:45.937616 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:03:45.946632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76-rootfs.mount: Deactivated successfully. Feb 9 19:03:45.965065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798-rootfs.mount: Deactivated successfully. 
Feb 9 19:03:45.981107 env[1623]: time="2024-02-09T19:03:45.981054055Z" level=info msg="shim disconnected" id=f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798 Feb 9 19:03:45.981107 env[1623]: time="2024-02-09T19:03:45.981108589Z" level=warning msg="cleaning up after shim disconnected" id=f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798 namespace=k8s.io Feb 9 19:03:45.981827 env[1623]: time="2024-02-09T19:03:45.981120253Z" level=info msg="cleaning up dead shim" Feb 9 19:03:45.996930 env[1623]: time="2024-02-09T19:03:45.996871337Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:03:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2682 runtime=io.containerd.runc.v2\n" Feb 9 19:03:46.353154 kubelet[2051]: E0209 19:03:46.353076 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:46.761967 env[1623]: time="2024-02-09T19:03:46.759842109Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:03:46.785373 env[1623]: time="2024-02-09T19:03:46.785321176Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\"" Feb 9 19:03:46.786134 env[1623]: time="2024-02-09T19:03:46.786096921Z" level=info msg="StartContainer for \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\"" Feb 9 19:03:46.811434 systemd[1]: Started cri-containerd-b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a.scope. Feb 9 19:03:46.857965 systemd[1]: cri-containerd-b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a.scope: Deactivated successfully. 
Feb 9 19:03:46.860877 env[1623]: time="2024-02-09T19:03:46.860798428Z" level=info msg="StartContainer for \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\" returns successfully" Feb 9 19:03:46.901047 env[1623]: time="2024-02-09T19:03:46.900984163Z" level=info msg="shim disconnected" id=b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a Feb 9 19:03:46.901047 env[1623]: time="2024-02-09T19:03:46.901045874Z" level=warning msg="cleaning up after shim disconnected" id=b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a namespace=k8s.io Feb 9 19:03:46.901551 env[1623]: time="2024-02-09T19:03:46.901059024Z" level=info msg="cleaning up dead shim" Feb 9 19:03:46.912595 env[1623]: time="2024-02-09T19:03:46.912551877Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:03:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2738 runtime=io.containerd.runc.v2\n" Feb 9 19:03:46.946056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a-rootfs.mount: Deactivated successfully. Feb 9 19:03:47.354845 kubelet[2051]: E0209 19:03:47.354724 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:03:47.764430 env[1623]: time="2024-02-09T19:03:47.763968740Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:03:47.795350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440834651.mount: Deactivated successfully. Feb 9 19:03:47.803298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195501878.mount: Deactivated successfully. 
Feb 9 19:03:47.809848 env[1623]: time="2024-02-09T19:03:47.809778559Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\""
Feb 9 19:03:47.811202 env[1623]: time="2024-02-09T19:03:47.811129589Z" level=info msg="StartContainer for \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\""
Feb 9 19:03:47.839242 systemd[1]: Started cri-containerd-64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f.scope.
Feb 9 19:03:47.882444 systemd[1]: cri-containerd-64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f.scope: Deactivated successfully.
Feb 9 19:03:47.884623 env[1623]: time="2024-02-09T19:03:47.884580942Z" level=info msg="StartContainer for \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\" returns successfully"
Feb 9 19:03:47.919982 env[1623]: time="2024-02-09T19:03:47.919934097Z" level=info msg="shim disconnected" id=64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f
Feb 9 19:03:47.919982 env[1623]: time="2024-02-09T19:03:47.919980543Z" level=warning msg="cleaning up after shim disconnected" id=64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f namespace=k8s.io
Feb 9 19:03:47.920660 env[1623]: time="2024-02-09T19:03:47.919992942Z" level=info msg="cleaning up dead shim"
Feb 9 19:03:47.930347 env[1623]: time="2024-02-09T19:03:47.930301375Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:03:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2794 runtime=io.containerd.runc.v2\n"
Feb 9 19:03:48.355102 kubelet[2051]: E0209 19:03:48.355061 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:48.766915 env[1623]: time="2024-02-09T19:03:48.766812627Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:03:48.795578 env[1623]: time="2024-02-09T19:03:48.795526589Z" level=info msg="CreateContainer within sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\""
Feb 9 19:03:48.796358 env[1623]: time="2024-02-09T19:03:48.796326959Z" level=info msg="StartContainer for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\""
Feb 9 19:03:48.830393 systemd[1]: Started cri-containerd-136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e.scope.
Feb 9 19:03:48.876269 env[1623]: time="2024-02-09T19:03:48.876063397Z" level=info msg="StartContainer for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" returns successfully"
Feb 9 19:03:49.027475 kubelet[2051]: I0209 19:03:49.027387 2051 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 19:03:49.356333 kubelet[2051]: E0209 19:03:49.356152 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:49.468698 kernel: Initializing XFRM netlink socket
Feb 9 19:03:49.806392 kubelet[2051]: I0209 19:03:49.806351 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-qqbmq" podStartSLOduration=-9.223371995048458e+09 pod.CreationTimestamp="2024-02-09 19:03:08 +0000 UTC" firstStartedPulling="2024-02-09 19:03:16.418882524 +0000 UTC m=+21.634453836" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:49.806151443 +0000 UTC m=+55.021722767" watchObservedRunningTime="2024-02-09 19:03:49.806317003 +0000 UTC m=+55.021888326"
Feb 9 19:03:50.357335 kubelet[2051]: E0209 19:03:50.357261 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:51.131210 systemd-networkd[1448]: cilium_host: Link UP
Feb 9 19:03:51.131358 systemd-networkd[1448]: cilium_net: Link UP
Feb 9 19:03:51.134989 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 19:03:51.135240 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 19:03:51.136902 systemd-networkd[1448]: cilium_net: Gained carrier
Feb 9 19:03:51.137078 (udev-worker)[2932]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:03:51.137165 systemd-networkd[1448]: cilium_host: Gained carrier
Feb 9 19:03:51.139541 (udev-worker)[2931]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:03:51.325640 systemd-networkd[1448]: cilium_vxlan: Link UP
Feb 9 19:03:51.325651 systemd-networkd[1448]: cilium_vxlan: Gained carrier
Feb 9 19:03:51.358366 kubelet[2051]: E0209 19:03:51.358295 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:51.426008 systemd-networkd[1448]: cilium_net: Gained IPv6LL
Feb 9 19:03:51.574704 kernel: NET: Registered PF_ALG protocol family
Feb 9 19:03:51.729896 systemd-networkd[1448]: cilium_host: Gained IPv6LL
Feb 9 19:03:52.359479 kubelet[2051]: E0209 19:03:52.359394 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:52.474227 systemd-networkd[1448]: lxc_health: Link UP
Feb 9 19:03:52.476622 (udev-worker)[2951]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:03:52.492196 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:03:52.491927 systemd-networkd[1448]: lxc_health: Gained carrier
Feb 9 19:03:52.744355 systemd-networkd[1448]: cilium_vxlan: Gained IPv6LL
Feb 9 19:03:53.359577 kubelet[2051]: E0209 19:03:53.359528 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:53.585847 systemd-networkd[1448]: lxc_health: Gained IPv6LL
Feb 9 19:03:54.221087 kubelet[2051]: I0209 19:03:54.221042 2051 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:03:54.229575 systemd[1]: Created slice kubepods-besteffort-pod41ff18b9_b1bb_4943_86be_ade2882dbdca.slice.
Feb 9 19:03:54.234453 kubelet[2051]: W0209 19:03:54.234227 2051 reflector.go:424] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.23.81" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.23.81' and this object
Feb 9 19:03:54.234453 kubelet[2051]: E0209 19:03:54.234447 2051 reflector.go:140] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.23.81" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.23.81' and this object
Feb 9 19:03:54.359742 kubelet[2051]: E0209 19:03:54.359705 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:54.369798 kubelet[2051]: I0209 19:03:54.369757 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdcdt\" (UniqueName: \"kubernetes.io/projected/41ff18b9-b1bb-4943-86be-ade2882dbdca-kube-api-access-tdcdt\") pod \"nginx-deployment-8ffc5cf85-wb7fc\" (UID: \"41ff18b9-b1bb-4943-86be-ade2882dbdca\") " pod="default/nginx-deployment-8ffc5cf85-wb7fc"
Feb 9 19:03:55.308801 kubelet[2051]: E0209 19:03:55.308759 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:55.361022 kubelet[2051]: E0209 19:03:55.360982 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:55.479250 kubelet[2051]: E0209 19:03:55.479212 2051 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:03:55.479483 kubelet[2051]: E0209 19:03:55.479464 2051 projected.go:198] Error preparing data for projected volume kube-api-access-tdcdt for pod default/nginx-deployment-8ffc5cf85-wb7fc: failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:03:55.479990 kubelet[2051]: E0209 19:03:55.479946 2051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41ff18b9-b1bb-4943-86be-ade2882dbdca-kube-api-access-tdcdt podName:41ff18b9-b1bb-4943-86be-ade2882dbdca nodeName:}" failed. No retries permitted until 2024-02-09 19:03:55.979641544 +0000 UTC m=+61.195212860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tdcdt" (UniqueName: "kubernetes.io/projected/41ff18b9-b1bb-4943-86be-ade2882dbdca-kube-api-access-tdcdt") pod "nginx-deployment-8ffc5cf85-wb7fc" (UID: "41ff18b9-b1bb-4943-86be-ade2882dbdca") : failed to sync configmap cache: timed out waiting for the condition
Feb 9 19:03:56.038629 env[1623]: time="2024-02-09T19:03:56.038121512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-wb7fc,Uid:41ff18b9-b1bb-4943-86be-ade2882dbdca,Namespace:default,Attempt:0,}"
Feb 9 19:03:56.113119 systemd-networkd[1448]: lxcc428733c10ca: Link UP
Feb 9 19:03:56.118838 (udev-worker)[3296]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:03:56.127810 kernel: eth0: renamed from tmp7c88c
Feb 9 19:03:56.139923 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:03:56.140048 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc428733c10ca: link becomes ready
Feb 9 19:03:56.139919 systemd-networkd[1448]: lxcc428733c10ca: Gained carrier
Feb 9 19:03:56.361893 kubelet[2051]: E0209 19:03:56.361848 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:57.362728 kubelet[2051]: E0209 19:03:57.362687 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:57.937852 systemd-networkd[1448]: lxcc428733c10ca: Gained IPv6LL
Feb 9 19:03:58.364380 kubelet[2051]: E0209 19:03:58.364293 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:03:59.131329 env[1623]: time="2024-02-09T19:03:59.131081356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:03:59.131803 env[1623]: time="2024-02-09T19:03:59.131746445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:03:59.131803 env[1623]: time="2024-02-09T19:03:59.131774177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:03:59.132276 env[1623]: time="2024-02-09T19:03:59.132058119Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c88c47ca4e97bd5829abed9364afaae3d5c4dd8030eb8a257296d655acaa2e9 pid=3316 runtime=io.containerd.runc.v2
Feb 9 19:03:59.175502 systemd[1]: run-containerd-runc-k8s.io-7c88c47ca4e97bd5829abed9364afaae3d5c4dd8030eb8a257296d655acaa2e9-runc.njIygB.mount: Deactivated successfully.
Feb 9 19:03:59.181120 systemd[1]: Started cri-containerd-7c88c47ca4e97bd5829abed9364afaae3d5c4dd8030eb8a257296d655acaa2e9.scope.
Feb 9 19:03:59.251570 env[1623]: time="2024-02-09T19:03:59.251527976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-wb7fc,Uid:41ff18b9-b1bb-4943-86be-ade2882dbdca,Namespace:default,Attempt:0,} returns sandbox id \"7c88c47ca4e97bd5829abed9364afaae3d5c4dd8030eb8a257296d655acaa2e9\""
Feb 9 19:03:59.253514 env[1623]: time="2024-02-09T19:03:59.253480484Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 19:03:59.364777 kubelet[2051]: E0209 19:03:59.364656 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:00.365203 kubelet[2051]: E0209 19:04:00.365154 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:01.366763 kubelet[2051]: E0209 19:04:01.366655 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:02.367661 kubelet[2051]: E0209 19:04:02.367615 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:03.367997 kubelet[2051]: E0209 19:04:03.367960 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:04.368490 kubelet[2051]: E0209 19:04:04.368396 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:05.369458 kubelet[2051]: E0209 19:04:05.369396 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:05.571114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195091103.mount: Deactivated successfully.
Feb 9 19:04:06.369824 kubelet[2051]: E0209 19:04:06.369786 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:06.938090 env[1623]: time="2024-02-09T19:04:06.938038149Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:06.941004 env[1623]: time="2024-02-09T19:04:06.940960380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:06.946232 env[1623]: time="2024-02-09T19:04:06.946187679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:06.948816 env[1623]: time="2024-02-09T19:04:06.948773609Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:06.949438 env[1623]: time="2024-02-09T19:04:06.949401343Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 9 19:04:06.951685 env[1623]: time="2024-02-09T19:04:06.951639925Z" level=info msg="CreateContainer within sandbox \"7c88c47ca4e97bd5829abed9364afaae3d5c4dd8030eb8a257296d655acaa2e9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 19:04:06.993214 env[1623]: time="2024-02-09T19:04:06.993156050Z" level=info msg="CreateContainer within sandbox \"7c88c47ca4e97bd5829abed9364afaae3d5c4dd8030eb8a257296d655acaa2e9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"82ef6d26c0bc2037f7037fbf047d91777a60bf306652c4cb3f065c78142ab74b\""
Feb 9 19:04:06.994444 env[1623]: time="2024-02-09T19:04:06.994406023Z" level=info msg="StartContainer for \"82ef6d26c0bc2037f7037fbf047d91777a60bf306652c4cb3f065c78142ab74b\""
Feb 9 19:04:07.036322 systemd[1]: Started cri-containerd-82ef6d26c0bc2037f7037fbf047d91777a60bf306652c4cb3f065c78142ab74b.scope.
Feb 9 19:04:07.087412 env[1623]: time="2024-02-09T19:04:07.087355449Z" level=info msg="StartContainer for \"82ef6d26c0bc2037f7037fbf047d91777a60bf306652c4cb3f065c78142ab74b\" returns successfully"
Feb 9 19:04:07.370412 kubelet[2051]: E0209 19:04:07.370373 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:07.871218 kubelet[2051]: I0209 19:04:07.871185 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-wb7fc" podStartSLOduration=-9.22337202298365e+09 pod.CreationTimestamp="2024-02-09 19:03:54 +0000 UTC" firstStartedPulling="2024-02-09 19:03:59.253014784 +0000 UTC m=+64.468586097" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:07.867898452 +0000 UTC m=+73.083469755" watchObservedRunningTime="2024-02-09 19:04:07.871124853 +0000 UTC m=+73.086696173"
Feb 9 19:04:07.963557 systemd[1]: run-containerd-runc-k8s.io-82ef6d26c0bc2037f7037fbf047d91777a60bf306652c4cb3f065c78142ab74b-runc.8B5xAW.mount: Deactivated successfully.
Feb 9 19:04:08.371294 kubelet[2051]: E0209 19:04:08.371241 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:09.371759 kubelet[2051]: E0209 19:04:09.371705 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:10.372214 kubelet[2051]: E0209 19:04:10.372165 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:11.373091 kubelet[2051]: E0209 19:04:11.373051 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:12.374659 kubelet[2051]: E0209 19:04:12.374604 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:13.375820 kubelet[2051]: E0209 19:04:13.375763 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:13.738955 kubelet[2051]: I0209 19:04:13.738824 2051 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:04:13.746215 systemd[1]: Created slice kubepods-besteffort-podad2141b7_0b10_48a5_8924_803c55d8a78a.slice.
Feb 9 19:04:13.829914 kubelet[2051]: I0209 19:04:13.829874 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ad2141b7-0b10-48a5-8924-803c55d8a78a-data\") pod \"nfs-server-provisioner-0\" (UID: \"ad2141b7-0b10-48a5-8924-803c55d8a78a\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:04:13.830097 kubelet[2051]: I0209 19:04:13.829993 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh86d\" (UniqueName: \"kubernetes.io/projected/ad2141b7-0b10-48a5-8924-803c55d8a78a-kube-api-access-mh86d\") pod \"nfs-server-provisioner-0\" (UID: \"ad2141b7-0b10-48a5-8924-803c55d8a78a\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:04:14.050980 env[1623]: time="2024-02-09T19:04:14.050922558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ad2141b7-0b10-48a5-8924-803c55d8a78a,Namespace:default,Attempt:0,}"
Feb 9 19:04:14.114110 (udev-worker)[3435]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:04:14.115031 systemd-networkd[1448]: lxc15cd89ffb2fe: Link UP
Feb 9 19:04:14.126690 kernel: eth0: renamed from tmpd2ca8
Feb 9 19:04:14.124479 (udev-worker)[3434]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:04:14.133219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:04:14.133327 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc15cd89ffb2fe: link becomes ready
Feb 9 19:04:14.133551 systemd-networkd[1448]: lxc15cd89ffb2fe: Gained carrier
Feb 9 19:04:14.376550 kubelet[2051]: E0209 19:04:14.376367 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:14.430718 env[1623]: time="2024-02-09T19:04:14.430607533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:14.431140 env[1623]: time="2024-02-09T19:04:14.431095389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:14.431303 env[1623]: time="2024-02-09T19:04:14.431245002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:14.431628 env[1623]: time="2024-02-09T19:04:14.431577388Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2ca898ab83c81482bb210cf5d87dd5bbff6c1a73c14947f1fd444eb8b23bd08 pid=3489 runtime=io.containerd.runc.v2
Feb 9 19:04:14.452473 systemd[1]: run-containerd-runc-k8s.io-d2ca898ab83c81482bb210cf5d87dd5bbff6c1a73c14947f1fd444eb8b23bd08-runc.FcI1wr.mount: Deactivated successfully.
Feb 9 19:04:14.457705 systemd[1]: Started cri-containerd-d2ca898ab83c81482bb210cf5d87dd5bbff6c1a73c14947f1fd444eb8b23bd08.scope.
Feb 9 19:04:14.512698 env[1623]: time="2024-02-09T19:04:14.512637133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ad2141b7-0b10-48a5-8924-803c55d8a78a,Namespace:default,Attempt:0,} returns sandbox id \"d2ca898ab83c81482bb210cf5d87dd5bbff6c1a73c14947f1fd444eb8b23bd08\""
Feb 9 19:04:14.514923 env[1623]: time="2024-02-09T19:04:14.514852059Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 19:04:15.308654 kubelet[2051]: E0209 19:04:15.308462 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:15.376611 kubelet[2051]: E0209 19:04:15.376545 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:15.537980 systemd-networkd[1448]: lxc15cd89ffb2fe: Gained IPv6LL
Feb 9 19:04:16.379174 kubelet[2051]: E0209 19:04:16.378860 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:17.379130 kubelet[2051]: E0209 19:04:17.379053 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:17.545895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545843322.mount: Deactivated successfully.
Feb 9 19:04:18.379757 kubelet[2051]: E0209 19:04:18.379717 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:19.380192 kubelet[2051]: E0209 19:04:19.380155 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:20.380377 kubelet[2051]: E0209 19:04:20.380338 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:20.616392 env[1623]: time="2024-02-09T19:04:20.616333214Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:20.623372 env[1623]: time="2024-02-09T19:04:20.623325022Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:20.630066 env[1623]: time="2024-02-09T19:04:20.630020826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:20.646841 env[1623]: time="2024-02-09T19:04:20.646039658Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:20.646993 env[1623]: time="2024-02-09T19:04:20.646904046Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 9 19:04:20.650205 env[1623]: time="2024-02-09T19:04:20.650162551Z" level=info msg="CreateContainer within sandbox \"d2ca898ab83c81482bb210cf5d87dd5bbff6c1a73c14947f1fd444eb8b23bd08\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 19:04:20.673747 env[1623]: time="2024-02-09T19:04:20.673692155Z" level=info msg="CreateContainer within sandbox \"d2ca898ab83c81482bb210cf5d87dd5bbff6c1a73c14947f1fd444eb8b23bd08\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"97da871ab997979ce3a87f2d82fb322ec79669e9ee06e57a1fe27833aa545b39\""
Feb 9 19:04:20.674272 env[1623]: time="2024-02-09T19:04:20.674212500Z" level=info msg="StartContainer for \"97da871ab997979ce3a87f2d82fb322ec79669e9ee06e57a1fe27833aa545b39\""
Feb 9 19:04:20.707317 systemd[1]: Started cri-containerd-97da871ab997979ce3a87f2d82fb322ec79669e9ee06e57a1fe27833aa545b39.scope.
Feb 9 19:04:20.766117 env[1623]: time="2024-02-09T19:04:20.766062335Z" level=info msg="StartContainer for \"97da871ab997979ce3a87f2d82fb322ec79669e9ee06e57a1fe27833aa545b39\" returns successfully"
Feb 9 19:04:20.965386 kubelet[2051]: I0209 19:04:20.965264 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.22337202888955e+09 pod.CreationTimestamp="2024-02-09 19:04:13 +0000 UTC" firstStartedPulling="2024-02-09 19:04:14.514199973 +0000 UTC m=+79.729771290" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:20.964551968 +0000 UTC m=+86.180123291" watchObservedRunningTime="2024-02-09 19:04:20.965226912 +0000 UTC m=+86.180798235"
Feb 9 19:04:21.381358 kubelet[2051]: E0209 19:04:21.381308 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:22.382412 kubelet[2051]: E0209 19:04:22.382370 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:23.382802 kubelet[2051]: E0209 19:04:23.382750 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:24.383722 kubelet[2051]: E0209 19:04:24.383665 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:25.383891 kubelet[2051]: E0209 19:04:25.383846 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:26.384008 kubelet[2051]: E0209 19:04:26.383952 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:27.384770 kubelet[2051]: E0209 19:04:27.384730 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:28.385038 kubelet[2051]: E0209 19:04:28.384873 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:29.386158 kubelet[2051]: E0209 19:04:29.386124 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:30.386900 kubelet[2051]: E0209 19:04:30.386848 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:30.611006 kubelet[2051]: I0209 19:04:30.610963 2051 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:04:30.616756 systemd[1]: Created slice kubepods-besteffort-pod32965344_026d_49d5_911f_a397e6e3758e.slice.
Feb 9 19:04:30.751896 kubelet[2051]: I0209 19:04:30.751521 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e9c9616d-cb29-4b97-bf53-c277b9b3ed16\" (UniqueName: \"kubernetes.io/nfs/32965344-026d-49d5-911f-a397e6e3758e-pvc-e9c9616d-cb29-4b97-bf53-c277b9b3ed16\") pod \"test-pod-1\" (UID: \"32965344-026d-49d5-911f-a397e6e3758e\") " pod="default/test-pod-1"
Feb 9 19:04:30.751896 kubelet[2051]: I0209 19:04:30.751582 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkh64\" (UniqueName: \"kubernetes.io/projected/32965344-026d-49d5-911f-a397e6e3758e-kube-api-access-nkh64\") pod \"test-pod-1\" (UID: \"32965344-026d-49d5-911f-a397e6e3758e\") " pod="default/test-pod-1"
Feb 9 19:04:30.922696 kernel: FS-Cache: Loaded
Feb 9 19:04:30.982342 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 19:04:30.982519 kernel: RPC: Registered udp transport module.
Feb 9 19:04:30.982724 kernel: RPC: Registered tcp transport module.
Feb 9 19:04:30.985854 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 19:04:31.048710 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 19:04:31.302411 kernel: NFS: Registering the id_resolver key type
Feb 9 19:04:31.302569 kernel: Key type id_resolver registered
Feb 9 19:04:31.302608 kernel: Key type id_legacy registered
Feb 9 19:04:31.354642 nfsidmap[3632]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 9 19:04:31.360225 nfsidmap[3633]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 9 19:04:31.387483 kubelet[2051]: E0209 19:04:31.387423 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:31.520943 env[1623]: time="2024-02-09T19:04:31.520893789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:32965344-026d-49d5-911f-a397e6e3758e,Namespace:default,Attempt:0,}"
Feb 9 19:04:31.569494 (udev-worker)[3620]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:04:31.570345 (udev-worker)[3626]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:04:31.571919 systemd-networkd[1448]: lxc2323c2bdcde5: Link UP
Feb 9 19:04:31.577788 kernel: eth0: renamed from tmp27180
Feb 9 19:04:31.586352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:04:31.586486 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2323c2bdcde5: link becomes ready
Feb 9 19:04:31.586962 systemd-networkd[1448]: lxc2323c2bdcde5: Gained carrier
Feb 9 19:04:31.919246 env[1623]: time="2024-02-09T19:04:31.919067325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:04:31.919424 env[1623]: time="2024-02-09T19:04:31.919120485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:04:31.919424 env[1623]: time="2024-02-09T19:04:31.919136982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:04:31.920057 env[1623]: time="2024-02-09T19:04:31.919992446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27180ea50f4e64c991185855bb183715567e02b1d9f86c36b840c91593083f2e pid=3662 runtime=io.containerd.runc.v2
Feb 9 19:04:31.947079 systemd[1]: run-containerd-runc-k8s.io-27180ea50f4e64c991185855bb183715567e02b1d9f86c36b840c91593083f2e-runc.V3DXmg.mount: Deactivated successfully.
Feb 9 19:04:31.957483 systemd[1]: Started cri-containerd-27180ea50f4e64c991185855bb183715567e02b1d9f86c36b840c91593083f2e.scope.
Feb 9 19:04:32.033828 env[1623]: time="2024-02-09T19:04:32.033785147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:32965344-026d-49d5-911f-a397e6e3758e,Namespace:default,Attempt:0,} returns sandbox id \"27180ea50f4e64c991185855bb183715567e02b1d9f86c36b840c91593083f2e\""
Feb 9 19:04:32.036317 env[1623]: time="2024-02-09T19:04:32.036287544Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 19:04:32.355051 env[1623]: time="2024-02-09T19:04:32.354938560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:32.358268 env[1623]: time="2024-02-09T19:04:32.358229994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:32.361364 env[1623]: time="2024-02-09T19:04:32.361323850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:32.364125 env[1623]: time="2024-02-09T19:04:32.364090137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:04:32.365105 env[1623]: time="2024-02-09T19:04:32.365071418Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 9 19:04:32.368234 env[1623]: time="2024-02-09T19:04:32.367802709Z" level=info msg="CreateContainer within sandbox \"27180ea50f4e64c991185855bb183715567e02b1d9f86c36b840c91593083f2e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 19:04:32.387748 kubelet[2051]: E0209 19:04:32.387660 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:32.389414 env[1623]: time="2024-02-09T19:04:32.389339298Z" level=info msg="CreateContainer within sandbox \"27180ea50f4e64c991185855bb183715567e02b1d9f86c36b840c91593083f2e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"06a35b6ba0addcf4cf28782c863af65d4bd6affafd1ff28e5090f9a1ab6c500e\""
Feb 9 19:04:32.390349 env[1623]: time="2024-02-09T19:04:32.390306407Z" level=info msg="StartContainer for \"06a35b6ba0addcf4cf28782c863af65d4bd6affafd1ff28e5090f9a1ab6c500e\""
Feb 9 19:04:32.414548 systemd[1]: Started cri-containerd-06a35b6ba0addcf4cf28782c863af65d4bd6affafd1ff28e5090f9a1ab6c500e.scope.
Feb 9 19:04:32.457208 env[1623]: time="2024-02-09T19:04:32.457161171Z" level=info msg="StartContainer for \"06a35b6ba0addcf4cf28782c863af65d4bd6affafd1ff28e5090f9a1ab6c500e\" returns successfully"
Feb 9 19:04:32.817916 systemd-networkd[1448]: lxc2323c2bdcde5: Gained IPv6LL
Feb 9 19:04:33.005811 kubelet[2051]: I0209 19:04:33.005773 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372017849043e+09 pod.CreationTimestamp="2024-02-09 19:04:14 +0000 UTC" firstStartedPulling="2024-02-09 19:04:32.035664665 +0000 UTC m=+97.251235965" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:33.005409903 +0000 UTC m=+98.220981208" watchObservedRunningTime="2024-02-09 19:04:33.005732205 +0000 UTC m=+98.221303541"
Feb 9 19:04:33.388063 kubelet[2051]: E0209 19:04:33.388011 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:34.388765 kubelet[2051]: E0209 19:04:34.388727 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:35.309555 kubelet[2051]: E0209 19:04:35.309507 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:35.388950 kubelet[2051]: E0209 19:04:35.388906 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:36.389326 kubelet[2051]: E0209 19:04:36.389275 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:37.389777 kubelet[2051]: E0209 19:04:37.389737 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:38.390202 kubelet[2051]: E0209 19:04:38.390145 2051 file_linux.go:61] "Unable to read config
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:39.169862 env[1623]: time="2024-02-09T19:04:39.169789829Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:04:39.180279 env[1623]: time="2024-02-09T19:04:39.180227613Z" level=info msg="StopContainer for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" with timeout 1 (s)" Feb 9 19:04:39.180566 env[1623]: time="2024-02-09T19:04:39.180524840Z" level=info msg="Stop container \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" with signal terminated" Feb 9 19:04:39.198731 systemd-networkd[1448]: lxc_health: Link DOWN Feb 9 19:04:39.198740 systemd-networkd[1448]: lxc_health: Lost carrier Feb 9 19:04:39.327129 systemd[1]: cri-containerd-136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e.scope: Deactivated successfully. Feb 9 19:04:39.327736 systemd[1]: cri-containerd-136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e.scope: Consumed 8.091s CPU time. Feb 9 19:04:39.362568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:39.391216 kubelet[2051]: E0209 19:04:39.391144 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:39.605041 env[1623]: time="2024-02-09T19:04:39.604969409Z" level=info msg="shim disconnected" id=136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e Feb 9 19:04:39.605041 env[1623]: time="2024-02-09T19:04:39.605038358Z" level=warning msg="cleaning up after shim disconnected" id=136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e namespace=k8s.io Feb 9 19:04:39.605426 env[1623]: time="2024-02-09T19:04:39.605051104Z" level=info msg="cleaning up dead shim" Feb 9 19:04:39.621592 env[1623]: time="2024-02-09T19:04:39.621494243Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3794 runtime=io.containerd.runc.v2\n" Feb 9 19:04:39.628002 env[1623]: time="2024-02-09T19:04:39.627934397Z" level=info msg="StopContainer for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" returns successfully" Feb 9 19:04:39.629182 env[1623]: time="2024-02-09T19:04:39.629060308Z" level=info msg="StopPodSandbox for \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\"" Feb 9 19:04:39.629393 env[1623]: time="2024-02-09T19:04:39.629266760Z" level=info msg="Container to stop \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.629393 env[1623]: time="2024-02-09T19:04:39.629289039Z" level=info msg="Container to stop \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.629393 env[1623]: time="2024-02-09T19:04:39.629304078Z" level=info msg="Container to stop \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Feb 9 19:04:39.629393 env[1623]: time="2024-02-09T19:04:39.629320016Z" level=info msg="Container to stop \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.629393 env[1623]: time="2024-02-09T19:04:39.629335560Z" level=info msg="Container to stop \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.631213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9-shm.mount: Deactivated successfully. Feb 9 19:04:39.643125 systemd[1]: cri-containerd-6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9.scope: Deactivated successfully. Feb 9 19:04:39.673783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:39.685516 env[1623]: time="2024-02-09T19:04:39.685462450Z" level=info msg="shim disconnected" id=6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9 Feb 9 19:04:39.685516 env[1623]: time="2024-02-09T19:04:39.685519713Z" level=warning msg="cleaning up after shim disconnected" id=6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9 namespace=k8s.io Feb 9 19:04:39.685899 env[1623]: time="2024-02-09T19:04:39.685532162Z" level=info msg="cleaning up dead shim" Feb 9 19:04:39.704775 env[1623]: time="2024-02-09T19:04:39.704723949Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3824 runtime=io.containerd.runc.v2\n" Feb 9 19:04:39.705423 env[1623]: time="2024-02-09T19:04:39.705385127Z" level=info msg="TearDown network for sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" successfully" Feb 9 19:04:39.705423 env[1623]: time="2024-02-09T19:04:39.705417205Z" level=info msg="StopPodSandbox for \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" returns successfully" Feb 9 19:04:39.810083 kubelet[2051]: I0209 19:04:39.810026 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g96v8\" (UniqueName: \"kubernetes.io/projected/46d72fcf-c8be-4e68-9f39-c8734b29680f-kube-api-access-g96v8\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810083 kubelet[2051]: I0209 19:04:39.810086 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-lib-modules\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810114 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-etc-cni-netd\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810167 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cni-path\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810191 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-run\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810214 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-bpf-maps\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810239 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-cgroup\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810263 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-hostproc\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810286 2051 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-xtables-lock\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810313 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-host-proc-sys-kernel\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810345 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-host-proc-sys-net\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.810501 kubelet[2051]: I0209 19:04:39.810486 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46d72fcf-c8be-4e68-9f39-c8734b29680f-hubble-tls\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.811198 kubelet[2051]: I0209 19:04:39.810526 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-config-path\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: \"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.811198 kubelet[2051]: I0209 19:04:39.810560 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46d72fcf-c8be-4e68-9f39-c8734b29680f-clustermesh-secrets\") pod \"46d72fcf-c8be-4e68-9f39-c8734b29680f\" (UID: 
\"46d72fcf-c8be-4e68-9f39-c8734b29680f\") " Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.811340 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.812425 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-hostproc" (OuterVolumeSpecName: "hostproc") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.812469 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.812497 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.812523 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: W0209 19:04:39.812866 2051 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/46d72fcf-c8be-4e68-9f39-c8734b29680f/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.814531 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.815986 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.816046 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cni-path" (OuterVolumeSpecName: "cni-path") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.816082 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.820342 kubelet[2051]: I0209 19:04:39.816174 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.833530 kubelet[2051]: I0209 19:04:39.821273 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:04:39.831956 systemd[1]: var-lib-kubelet-pods-46d72fcf\x2dc8be\x2d4e68\x2d9f39\x2dc8734b29680f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg96v8.mount: Deactivated successfully. Feb 9 19:04:39.837636 kubelet[2051]: I0209 19:04:39.837592 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d72fcf-c8be-4e68-9f39-c8734b29680f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:04:39.838133 kubelet[2051]: I0209 19:04:39.838099 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d72fcf-c8be-4e68-9f39-c8734b29680f-kube-api-access-g96v8" (OuterVolumeSpecName: "kube-api-access-g96v8") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "kube-api-access-g96v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:39.839437 kubelet[2051]: I0209 19:04:39.839407 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d72fcf-c8be-4e68-9f39-c8734b29680f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "46d72fcf-c8be-4e68-9f39-c8734b29680f" (UID: "46d72fcf-c8be-4e68-9f39-c8734b29680f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.910884 2051 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-config-path\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.910921 2051 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46d72fcf-c8be-4e68-9f39-c8734b29680f-clustermesh-secrets\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.910937 2051 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-g96v8\" (UniqueName: \"kubernetes.io/projected/46d72fcf-c8be-4e68-9f39-c8734b29680f-kube-api-access-g96v8\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.910952 2051 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-lib-modules\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.910965 2051 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-etc-cni-netd\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.910978 2051 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cni-path\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.911143 2051 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-run\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.911160 2051 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-bpf-maps\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.911192 2051 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-cilium-cgroup\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.911205 2051 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-hostproc\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.911219 2051 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-xtables-lock\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 
kubelet[2051]: I0209 19:04:39.911232 2051 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-host-proc-sys-kernel\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.911248 2051 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46d72fcf-c8be-4e68-9f39-c8734b29680f-host-proc-sys-net\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:39.911311 kubelet[2051]: I0209 19:04:39.911262 2051 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46d72fcf-c8be-4e68-9f39-c8734b29680f-hubble-tls\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:40.000909 kubelet[2051]: I0209 19:04:40.000883 2051 scope.go:115] "RemoveContainer" containerID="136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e" Feb 9 19:04:40.007390 env[1623]: time="2024-02-09T19:04:40.005724710Z" level=info msg="RemoveContainer for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\"" Feb 9 19:04:40.006579 systemd[1]: Removed slice kubepods-burstable-pod46d72fcf_c8be_4e68_9f39_c8734b29680f.slice. Feb 9 19:04:40.006684 systemd[1]: kubepods-burstable-pod46d72fcf_c8be_4e68_9f39_c8734b29680f.slice: Consumed 8.204s CPU time. 
Feb 9 19:04:40.011918 env[1623]: time="2024-02-09T19:04:40.011866948Z" level=info msg="RemoveContainer for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" returns successfully" Feb 9 19:04:40.012251 kubelet[2051]: I0209 19:04:40.012222 2051 scope.go:115] "RemoveContainer" containerID="64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f" Feb 9 19:04:40.020411 env[1623]: time="2024-02-09T19:04:40.020367208Z" level=info msg="RemoveContainer for \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\"" Feb 9 19:04:40.026094 env[1623]: time="2024-02-09T19:04:40.026042349Z" level=info msg="RemoveContainer for \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\" returns successfully" Feb 9 19:04:40.026333 kubelet[2051]: I0209 19:04:40.026304 2051 scope.go:115] "RemoveContainer" containerID="b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a" Feb 9 19:04:40.029165 env[1623]: time="2024-02-09T19:04:40.029127843Z" level=info msg="RemoveContainer for \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\"" Feb 9 19:04:40.039073 env[1623]: time="2024-02-09T19:04:40.038830244Z" level=info msg="RemoveContainer for \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\" returns successfully" Feb 9 19:04:40.039405 kubelet[2051]: I0209 19:04:40.039380 2051 scope.go:115] "RemoveContainer" containerID="f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798" Feb 9 19:04:40.043437 env[1623]: time="2024-02-09T19:04:40.043081411Z" level=info msg="RemoveContainer for \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\"" Feb 9 19:04:40.052194 env[1623]: time="2024-02-09T19:04:40.052147272Z" level=info msg="RemoveContainer for \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\" returns successfully" Feb 9 19:04:40.053874 kubelet[2051]: I0209 19:04:40.053840 2051 scope.go:115] "RemoveContainer" 
containerID="87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76" Feb 9 19:04:40.056979 env[1623]: time="2024-02-09T19:04:40.056937041Z" level=info msg="RemoveContainer for \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\"" Feb 9 19:04:40.061261 env[1623]: time="2024-02-09T19:04:40.061216597Z" level=info msg="RemoveContainer for \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\" returns successfully" Feb 9 19:04:40.061596 kubelet[2051]: I0209 19:04:40.061569 2051 scope.go:115] "RemoveContainer" containerID="136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e" Feb 9 19:04:40.062066 env[1623]: time="2024-02-09T19:04:40.061966055Z" level=error msg="ContainerStatus for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\": not found" Feb 9 19:04:40.062413 kubelet[2051]: E0209 19:04:40.062389 2051 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\": not found" containerID="136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e" Feb 9 19:04:40.062505 kubelet[2051]: I0209 19:04:40.062441 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e} err="failed to get container status \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\": not found" Feb 9 19:04:40.062505 kubelet[2051]: I0209 19:04:40.062458 2051 scope.go:115] "RemoveContainer" 
containerID="64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f" Feb 9 19:04:40.063298 env[1623]: time="2024-02-09T19:04:40.063165651Z" level=error msg="ContainerStatus for \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\": not found" Feb 9 19:04:40.063525 kubelet[2051]: E0209 19:04:40.063504 2051 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\": not found" containerID="64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f" Feb 9 19:04:40.063608 kubelet[2051]: I0209 19:04:40.063541 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f} err="failed to get container status \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\": rpc error: code = NotFound desc = an error occurred when try to find container \"64d65d0ee5a581ccc31012772e60ae3940d7dcc103ee83a538d45cf77a2d690f\": not found" Feb 9 19:04:40.063608 kubelet[2051]: I0209 19:04:40.063557 2051 scope.go:115] "RemoveContainer" containerID="b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a" Feb 9 19:04:40.063831 env[1623]: time="2024-02-09T19:04:40.063760408Z" level=error msg="ContainerStatus for \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\": not found" Feb 9 19:04:40.064031 kubelet[2051]: E0209 19:04:40.064010 2051 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\": not found" containerID="b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a" Feb 9 19:04:40.064111 kubelet[2051]: I0209 19:04:40.064044 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a} err="failed to get container status \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5ca6c4829b3402125552af904889879f5a2d9cbbea2be864ba05ef68f956d7a\": not found" Feb 9 19:04:40.064111 kubelet[2051]: I0209 19:04:40.064062 2051 scope.go:115] "RemoveContainer" containerID="f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798" Feb 9 19:04:40.064323 env[1623]: time="2024-02-09T19:04:40.064259416Z" level=error msg="ContainerStatus for \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\": not found" Feb 9 19:04:40.064486 kubelet[2051]: E0209 19:04:40.064467 2051 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\": not found" containerID="f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798" Feb 9 19:04:40.064574 kubelet[2051]: I0209 19:04:40.064500 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798} err="failed to get container status \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"f789d3eed6bda3588b9c1a5b67d47901c6dc95c65bf7c490a26bd8de29a81798\": not found" Feb 9 19:04:40.064574 kubelet[2051]: I0209 19:04:40.064519 2051 scope.go:115] "RemoveContainer" containerID="87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76" Feb 9 19:04:40.064767 env[1623]: time="2024-02-09T19:04:40.064712341Z" level=error msg="ContainerStatus for \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\": not found" Feb 9 19:04:40.064877 kubelet[2051]: E0209 19:04:40.064862 2051 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\": not found" containerID="87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76" Feb 9 19:04:40.065208 kubelet[2051]: I0209 19:04:40.064895 2051 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76} err="failed to get container status \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\": rpc error: code = NotFound desc = an error occurred when try to find container \"87afa72bee86c3deaaf3d26e1c6b06252b6a0ca18e64fac6b76ab827bc9b3a76\": not found" Feb 9 19:04:40.137014 systemd[1]: var-lib-kubelet-pods-46d72fcf\x2dc8be\x2d4e68\x2d9f39\x2dc8734b29680f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:04:40.137149 systemd[1]: var-lib-kubelet-pods-46d72fcf\x2dc8be\x2d4e68\x2d9f39\x2dc8734b29680f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 19:04:40.391870 kubelet[2051]: E0209 19:04:40.391820 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:40.464424 kubelet[2051]: E0209 19:04:40.464382 2051 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:04:41.392151 kubelet[2051]: E0209 19:04:41.392098 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:41.619837 env[1623]: time="2024-02-09T19:04:41.619654150Z" level=info msg="StopContainer for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" with timeout 1 (s)" Feb 9 19:04:41.619837 env[1623]: time="2024-02-09T19:04:41.619728171Z" level=error msg="StopContainer for \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\": not found" Feb 9 19:04:41.620714 kubelet[2051]: E0209 19:04:41.620104 2051 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e\": not found" containerID="136f8ed42c931ca85052d1f2e820d09385111a8816315df98480c076b3b23c4e" Feb 9 19:04:41.621284 kubelet[2051]: I0209 19:04:41.621012 2051 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=46d72fcf-c8be-4e68-9f39-c8734b29680f path="/var/lib/kubelet/pods/46d72fcf-c8be-4e68-9f39-c8734b29680f/volumes" Feb 9 19:04:41.621384 env[1623]: time="2024-02-09T19:04:41.621006870Z" level=info msg="StopPodSandbox for \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\"" Feb 9 19:04:41.621384 env[1623]: 
time="2024-02-09T19:04:41.621102343Z" level=info msg="TearDown network for sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" successfully" Feb 9 19:04:41.621384 env[1623]: time="2024-02-09T19:04:41.621211236Z" level=info msg="StopPodSandbox for \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" returns successfully" Feb 9 19:04:42.063021 kubelet[2051]: I0209 19:04:42.062968 2051 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:42.063021 kubelet[2051]: E0209 19:04:42.063032 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46d72fcf-c8be-4e68-9f39-c8734b29680f" containerName="clean-cilium-state" Feb 9 19:04:42.063302 kubelet[2051]: E0209 19:04:42.063046 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46d72fcf-c8be-4e68-9f39-c8734b29680f" containerName="cilium-agent" Feb 9 19:04:42.063302 kubelet[2051]: E0209 19:04:42.063055 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46d72fcf-c8be-4e68-9f39-c8734b29680f" containerName="mount-cgroup" Feb 9 19:04:42.063302 kubelet[2051]: E0209 19:04:42.063064 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46d72fcf-c8be-4e68-9f39-c8734b29680f" containerName="apply-sysctl-overwrites" Feb 9 19:04:42.063302 kubelet[2051]: E0209 19:04:42.063073 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46d72fcf-c8be-4e68-9f39-c8734b29680f" containerName="mount-bpf-fs" Feb 9 19:04:42.063302 kubelet[2051]: I0209 19:04:42.063097 2051 memory_manager.go:346] "RemoveStaleState removing state" podUID="46d72fcf-c8be-4e68-9f39-c8734b29680f" containerName="cilium-agent" Feb 9 19:04:42.074398 kubelet[2051]: W0209 19:04:42.074369 2051 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.23.81" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no 
relationship found between node '172.31.23.81' and this object Feb 9 19:04:42.074650 kubelet[2051]: E0209 19:04:42.074633 2051 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.23.81" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.81' and this object Feb 9 19:04:42.075303 systemd[1]: Created slice kubepods-besteffort-pod6e28ac21_b81e_44bc_bc17_cbd4ac3befe1.slice. Feb 9 19:04:42.113547 kubelet[2051]: I0209 19:04:42.113507 2051 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:42.120239 systemd[1]: Created slice kubepods-burstable-pod21601c11_c239_404e_9201_d910334c7e83.slice. Feb 9 19:04:42.122710 kubelet[2051]: I0209 19:04:42.122549 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8zdh\" (UniqueName: \"kubernetes.io/projected/6e28ac21-b81e-44bc-bc17-cbd4ac3befe1-kube-api-access-v8zdh\") pod \"cilium-operator-f59cbd8c6-kkgsr\" (UID: \"6e28ac21-b81e-44bc-bc17-cbd4ac3befe1\") " pod="kube-system/cilium-operator-f59cbd8c6-kkgsr" Feb 9 19:04:42.122710 kubelet[2051]: I0209 19:04:42.122600 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e28ac21-b81e-44bc-bc17-cbd4ac3befe1-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-kkgsr\" (UID: \"6e28ac21-b81e-44bc-bc17-cbd4ac3befe1\") " pod="kube-system/cilium-operator-f59cbd8c6-kkgsr" Feb 9 19:04:42.223617 kubelet[2051]: I0209 19:04:42.223576 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21601c11-c239-404e-9201-d910334c7e83-clustermesh-secrets\") pod \"cilium-g67dk\" (UID: 
\"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.223802 kubelet[2051]: I0209 19:04:42.223774 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-bpf-maps\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.223933 kubelet[2051]: I0209 19:04:42.223920 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-xtables-lock\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224060 kubelet[2051]: I0209 19:04:42.224049 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/21601c11-c239-404e-9201-d910334c7e83-cilium-ipsec-secrets\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224312 kubelet[2051]: I0209 19:04:42.224289 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-lib-modules\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224405 kubelet[2051]: I0209 19:04:42.224347 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cilium-run\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224405 kubelet[2051]: I0209 19:04:42.224377 2051 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-hostproc\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224506 kubelet[2051]: I0209 19:04:42.224431 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-etc-cni-netd\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224506 kubelet[2051]: I0209 19:04:42.224482 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-host-proc-sys-net\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224605 kubelet[2051]: I0209 19:04:42.224517 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-host-proc-sys-kernel\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224605 kubelet[2051]: I0209 19:04:42.224566 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cilium-cgroup\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224605 kubelet[2051]: I0209 19:04:42.224601 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cni-path\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224773 kubelet[2051]: I0209 19:04:42.224646 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21601c11-c239-404e-9201-d910334c7e83-cilium-config-path\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224773 kubelet[2051]: I0209 19:04:42.224739 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21601c11-c239-404e-9201-d910334c7e83-hubble-tls\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.224879 kubelet[2051]: I0209 19:04:42.224794 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2f2\" (UniqueName: \"kubernetes.io/projected/21601c11-c239-404e-9201-d910334c7e83-kube-api-access-bq2f2\") pod \"cilium-g67dk\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " pod="kube-system/cilium-g67dk" Feb 9 19:04:42.393282 kubelet[2051]: E0209 19:04:42.393166 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:43.225823 kubelet[2051]: E0209 19:04:43.225776 2051 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:04:43.226049 kubelet[2051]: E0209 19:04:43.226015 2051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e28ac21-b81e-44bc-bc17-cbd4ac3befe1-cilium-config-path podName:6e28ac21-b81e-44bc-bc17-cbd4ac3befe1 nodeName:}" failed. 
No retries permitted until 2024-02-09 19:04:43.72588117 +0000 UTC m=+108.941452491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/6e28ac21-b81e-44bc-bc17-cbd4ac3befe1-cilium-config-path") pod "cilium-operator-f59cbd8c6-kkgsr" (UID: "6e28ac21-b81e-44bc-bc17-cbd4ac3befe1") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:04:43.326280 kubelet[2051]: E0209 19:04:43.326241 2051 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:04:43.326469 kubelet[2051]: E0209 19:04:43.326341 2051 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21601c11-c239-404e-9201-d910334c7e83-cilium-config-path podName:21601c11-c239-404e-9201-d910334c7e83 nodeName:}" failed. No retries permitted until 2024-02-09 19:04:43.826319285 +0000 UTC m=+109.041890600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/21601c11-c239-404e-9201-d910334c7e83-cilium-config-path") pod "cilium-g67dk" (UID: "21601c11-c239-404e-9201-d910334c7e83") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:04:43.394028 kubelet[2051]: E0209 19:04:43.393992 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:43.889886 env[1623]: time="2024-02-09T19:04:43.889746317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-kkgsr,Uid:6e28ac21-b81e-44bc-bc17-cbd4ac3befe1,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:43.913067 env[1623]: time="2024-02-09T19:04:43.912955961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:43.913235 env[1623]: time="2024-02-09T19:04:43.913091504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:43.913235 env[1623]: time="2024-02-09T19:04:43.913121720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:43.913640 env[1623]: time="2024-02-09T19:04:43.913582925Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c15616914a12f324892f3c027176ece8c395e35514af85d0638bb9877360c5ee pid=3854 runtime=io.containerd.runc.v2 Feb 9 19:04:43.928813 env[1623]: time="2024-02-09T19:04:43.928771547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g67dk,Uid:21601c11-c239-404e-9201-d910334c7e83,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:43.938531 systemd[1]: run-containerd-runc-k8s.io-c15616914a12f324892f3c027176ece8c395e35514af85d0638bb9877360c5ee-runc.uDkQjV.mount: Deactivated successfully. Feb 9 19:04:43.953811 systemd[1]: Started cri-containerd-c15616914a12f324892f3c027176ece8c395e35514af85d0638bb9877360c5ee.scope. Feb 9 19:04:43.983896 env[1623]: time="2024-02-09T19:04:43.983696303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:43.983896 env[1623]: time="2024-02-09T19:04:43.983739550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:43.983896 env[1623]: time="2024-02-09T19:04:43.983756979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:43.987690 env[1623]: time="2024-02-09T19:04:43.984253677Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e pid=3889 runtime=io.containerd.runc.v2 Feb 9 19:04:44.006196 systemd[1]: Started cri-containerd-ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e.scope. Feb 9 19:04:44.046872 env[1623]: time="2024-02-09T19:04:44.046815312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-kkgsr,Uid:6e28ac21-b81e-44bc-bc17-cbd4ac3befe1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c15616914a12f324892f3c027176ece8c395e35514af85d0638bb9877360c5ee\"" Feb 9 19:04:44.048855 env[1623]: time="2024-02-09T19:04:44.048814586Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:04:44.063145 env[1623]: time="2024-02-09T19:04:44.063095778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g67dk,Uid:21601c11-c239-404e-9201-d910334c7e83,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\"" Feb 9 19:04:44.068161 env[1623]: time="2024-02-09T19:04:44.068117627Z" level=info msg="CreateContainer within sandbox \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:04:44.086934 env[1623]: time="2024-02-09T19:04:44.086880363Z" level=info msg="CreateContainer within sandbox \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc\"" Feb 9 19:04:44.087499 env[1623]: time="2024-02-09T19:04:44.087468016Z" level=info 
msg="StartContainer for \"e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc\"" Feb 9 19:04:44.106109 systemd[1]: Started cri-containerd-e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc.scope. Feb 9 19:04:44.119084 systemd[1]: cri-containerd-e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc.scope: Deactivated successfully. Feb 9 19:04:44.150777 env[1623]: time="2024-02-09T19:04:44.149660418Z" level=info msg="shim disconnected" id=e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc Feb 9 19:04:44.150777 env[1623]: time="2024-02-09T19:04:44.149733092Z" level=warning msg="cleaning up after shim disconnected" id=e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc namespace=k8s.io Feb 9 19:04:44.150777 env[1623]: time="2024-02-09T19:04:44.149744977Z" level=info msg="cleaning up dead shim" Feb 9 19:04:44.160301 env[1623]: time="2024-02-09T19:04:44.160247242Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3953 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:04:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:04:44.160694 env[1623]: time="2024-02-09T19:04:44.160521787Z" level=error msg="copy shim log" error="read /proc/self/fd/86: file already closed" Feb 9 19:04:44.160935 env[1623]: time="2024-02-09T19:04:44.160888679Z" level=error msg="Failed to pipe stderr of container \"e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc\"" error="reading from a closed fifo" Feb 9 19:04:44.165827 env[1623]: time="2024-02-09T19:04:44.165747084Z" level=error msg="Failed to pipe stdout of container \"e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc\"" error="reading from a closed fifo" Feb 9 
19:04:44.168194 env[1623]: time="2024-02-09T19:04:44.168133287Z" level=error msg="StartContainer for \"e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:04:44.168492 kubelet[2051]: E0209 19:04:44.168468 2051 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc" Feb 9 19:04:44.169050 kubelet[2051]: E0209 19:04:44.168865 2051 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:04:44.169050 kubelet[2051]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:04:44.169050 kubelet[2051]: rm /hostbin/cilium-mount Feb 9 19:04:44.169050 kubelet[2051]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-bq2f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-g67dk_kube-system(21601c11-c239-404e-9201-d910334c7e83): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:04:44.169050 kubelet[2051]: E0209 19:04:44.168934 2051 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-g67dk" podUID=21601c11-c239-404e-9201-d910334c7e83 Feb 9 19:04:44.394202 kubelet[2051]: E0209 19:04:44.394162 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:45.024901 env[1623]: time="2024-02-09T19:04:45.024860161Z" level=info msg="StopPodSandbox for \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\"" Feb 9 19:04:45.038019 env[1623]: time="2024-02-09T19:04:45.024923456Z" level=info msg="Container to stop \"e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:45.033693 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e-shm.mount: Deactivated successfully. Feb 9 19:04:45.055424 systemd[1]: cri-containerd-ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e.scope: Deactivated successfully. Feb 9 19:04:45.091225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:45.110358 env[1623]: time="2024-02-09T19:04:45.110300258Z" level=info msg="shim disconnected" id=ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e Feb 9 19:04:45.110358 env[1623]: time="2024-02-09T19:04:45.110352308Z" level=warning msg="cleaning up after shim disconnected" id=ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e namespace=k8s.io Feb 9 19:04:45.110651 env[1623]: time="2024-02-09T19:04:45.110365928Z" level=info msg="cleaning up dead shim" Feb 9 19:04:45.136388 env[1623]: time="2024-02-09T19:04:45.136341394Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3986 runtime=io.containerd.runc.v2\n" Feb 9 19:04:45.136929 env[1623]: time="2024-02-09T19:04:45.136898622Z" level=info msg="TearDown network for sandbox \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\" successfully" Feb 9 19:04:45.137055 env[1623]: time="2024-02-09T19:04:45.137035062Z" level=info msg="StopPodSandbox for \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\" returns successfully" Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.244246 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-xtables-lock\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.244312 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-hostproc\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.244634 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq2f2\" (UniqueName: 
\"kubernetes.io/projected/21601c11-c239-404e-9201-d910334c7e83-kube-api-access-bq2f2\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.244694 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cilium-cgroup\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.244733 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cilium-run\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.244841 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-etc-cni-netd\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.245024 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-host-proc-sys-net\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.245083 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/21601c11-c239-404e-9201-d910334c7e83-cilium-ipsec-secrets\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.245295 2051 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-bpf-maps\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.245644 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-lib-modules\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.245789 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21601c11-c239-404e-9201-d910334c7e83-hubble-tls\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.245897 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21601c11-c239-404e-9201-d910334c7e83-clustermesh-secrets\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.245941 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21601c11-c239-404e-9201-d910334c7e83-cilium-config-path\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.246097 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cni-path\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: 
\"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.246153 2051 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-host-proc-sys-kernel\") pod \"21601c11-c239-404e-9201-d910334c7e83\" (UID: \"21601c11-c239-404e-9201-d910334c7e83\") " Feb 9 19:04:45.247795 kubelet[2051]: I0209 19:04:45.246262 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.248976 kubelet[2051]: I0209 19:04:45.246316 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.248976 kubelet[2051]: I0209 19:04:45.246342 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-hostproc" (OuterVolumeSpecName: "hostproc") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.248976 kubelet[2051]: I0209 19:04:45.246962 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.248976 kubelet[2051]: I0209 19:04:45.247087 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.248976 kubelet[2051]: I0209 19:04:45.247237 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.248976 kubelet[2051]: I0209 19:04:45.247277 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.248976 kubelet[2051]: I0209 19:04:45.247305 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.251153 kubelet[2051]: I0209 19:04:45.249827 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.252945 kubelet[2051]: W0209 19:04:45.250555 2051 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/21601c11-c239-404e-9201-d910334c7e83/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:04:45.254518 kubelet[2051]: I0209 19:04:45.254483 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cni-path" (OuterVolumeSpecName: "cni-path") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:45.256881 kubelet[2051]: I0209 19:04:45.256697 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21601c11-c239-404e-9201-d910334c7e83-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:04:45.265940 systemd[1]: var-lib-kubelet-pods-21601c11\x2dc239\x2d404e\x2d9201\x2dd910334c7e83-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbq2f2.mount: Deactivated successfully. Feb 9 19:04:45.276335 systemd[1]: var-lib-kubelet-pods-21601c11\x2dc239\x2d404e\x2d9201\x2dd910334c7e83-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:04:45.283255 kubelet[2051]: I0209 19:04:45.283209 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21601c11-c239-404e-9201-d910334c7e83-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:04:45.283374 kubelet[2051]: I0209 19:04:45.283334 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21601c11-c239-404e-9201-d910334c7e83-kube-api-access-bq2f2" (OuterVolumeSpecName: "kube-api-access-bq2f2") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "kube-api-access-bq2f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:45.291860 kubelet[2051]: I0209 19:04:45.291787 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21601c11-c239-404e-9201-d910334c7e83-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:04:45.296090 kubelet[2051]: I0209 19:04:45.296041 2051 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21601c11-c239-404e-9201-d910334c7e83-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "21601c11-c239-404e-9201-d910334c7e83" (UID: "21601c11-c239-404e-9201-d910334c7e83"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:45.347046 kubelet[2051]: I0209 19:04:45.347013 2051 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-xtables-lock\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347046 kubelet[2051]: I0209 19:04:45.347051 2051 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-hostproc\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347067 2051 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-bq2f2\" (UniqueName: \"kubernetes.io/projected/21601c11-c239-404e-9201-d910334c7e83-kube-api-access-bq2f2\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347081 2051 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cilium-cgroup\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347093 2051 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-etc-cni-netd\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347106 2051 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-host-proc-sys-net\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347118 2051 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/21601c11-c239-404e-9201-d910334c7e83-cilium-ipsec-secrets\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347131 2051 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-bpf-maps\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347143 2051 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-lib-modules\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347154 2051 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cilium-run\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347167 2051 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21601c11-c239-404e-9201-d910334c7e83-hubble-tls\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347203 2051 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21601c11-c239-404e-9201-d910334c7e83-clustermesh-secrets\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347216 2051 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21601c11-c239-404e-9201-d910334c7e83-cilium-config-path\") on node 
\"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347229 2051 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-cni-path\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.347279 kubelet[2051]: I0209 19:04:45.347245 2051 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21601c11-c239-404e-9201-d910334c7e83-host-proc-sys-kernel\") on node \"172.31.23.81\" DevicePath \"\"" Feb 9 19:04:45.395383 kubelet[2051]: E0209 19:04:45.395348 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:45.468027 kubelet[2051]: E0209 19:04:45.467958 2051 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:04:45.632825 systemd[1]: Removed slice kubepods-burstable-pod21601c11_c239_404e_9201_d910334c7e83.slice. Feb 9 19:04:45.709201 kubelet[2051]: I0209 19:04:45.709169 2051 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:45.709465 kubelet[2051]: E0209 19:04:45.709452 2051 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21601c11-c239-404e-9201-d910334c7e83" containerName="mount-cgroup" Feb 9 19:04:45.709586 kubelet[2051]: I0209 19:04:45.709575 2051 memory_manager.go:346] "RemoveStaleState removing state" podUID="21601c11-c239-404e-9201-d910334c7e83" containerName="mount-cgroup" Feb 9 19:04:45.724720 systemd[1]: Created slice kubepods-burstable-pod76e7754c_9316_489d_a97f_02982d71b180.slice. 
Feb 9 19:04:45.750707 kubelet[2051]: I0209 19:04:45.750658 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-cilium-run\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.750877 kubelet[2051]: I0209 19:04:45.750729 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-bpf-maps\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.750877 kubelet[2051]: I0209 19:04:45.750771 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-cilium-cgroup\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.750877 kubelet[2051]: I0209 19:04:45.750796 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-etc-cni-netd\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.750877 kubelet[2051]: I0209 19:04:45.750843 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/76e7754c-9316-489d-a97f-02982d71b180-kube-api-access-z2vmp\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.750877 kubelet[2051]: I0209 19:04:45.750876 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-hostproc\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751118 kubelet[2051]: I0209 19:04:45.750999 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-cni-path\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751238 kubelet[2051]: I0209 19:04:45.751175 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-lib-modules\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751382 kubelet[2051]: I0209 19:04:45.751365 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/76e7754c-9316-489d-a97f-02982d71b180-cilium-ipsec-secrets\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751443 kubelet[2051]: I0209 19:04:45.751433 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-host-proc-sys-net\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751518 kubelet[2051]: I0209 19:04:45.751507 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76e7754c-9316-489d-a97f-02982d71b180-clustermesh-secrets\") pod 
\"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751596 kubelet[2051]: I0209 19:04:45.751586 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-xtables-lock\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751662 kubelet[2051]: I0209 19:04:45.751652 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76e7754c-9316-489d-a97f-02982d71b180-cilium-config-path\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751752 kubelet[2051]: I0209 19:04:45.751741 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76e7754c-9316-489d-a97f-02982d71b180-host-proc-sys-kernel\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.751886 kubelet[2051]: I0209 19:04:45.751874 2051 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76e7754c-9316-489d-a97f-02982d71b180-hubble-tls\") pod \"cilium-cmmdx\" (UID: \"76e7754c-9316-489d-a97f-02982d71b180\") " pod="kube-system/cilium-cmmdx" Feb 9 19:04:45.923920 systemd[1]: var-lib-kubelet-pods-21601c11\x2dc239\x2d404e\x2d9201\x2dd910334c7e83-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:04:45.924294 systemd[1]: var-lib-kubelet-pods-21601c11\x2dc239\x2d404e\x2d9201\x2dd910334c7e83-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 19:04:46.037051 env[1623]: time="2024-02-09T19:04:46.037005268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cmmdx,Uid:76e7754c-9316-489d-a97f-02982d71b180,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:46.066641 kubelet[2051]: I0209 19:04:46.066327 2051 scope.go:115] "RemoveContainer" containerID="e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc" Feb 9 19:04:46.070846 env[1623]: time="2024-02-09T19:04:46.070801483Z" level=info msg="RemoveContainer for \"e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc\"" Feb 9 19:04:46.079063 env[1623]: time="2024-02-09T19:04:46.078934416Z" level=info msg="RemoveContainer for \"e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc\" returns successfully" Feb 9 19:04:46.106254 env[1623]: time="2024-02-09T19:04:46.094491178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:46.106254 env[1623]: time="2024-02-09T19:04:46.094632571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:46.106254 env[1623]: time="2024-02-09T19:04:46.094650866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:46.106254 env[1623]: time="2024-02-09T19:04:46.094888006Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef pid=4016 runtime=io.containerd.runc.v2 Feb 9 19:04:46.120128 systemd[1]: Started cri-containerd-d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef.scope. 
Feb 9 19:04:46.186303 env[1623]: time="2024-02-09T19:04:46.184914160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cmmdx,Uid:76e7754c-9316-489d-a97f-02982d71b180,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\"" Feb 9 19:04:46.190388 env[1623]: time="2024-02-09T19:04:46.190345772Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:04:46.224565 env[1623]: time="2024-02-09T19:04:46.224524296Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34\"" Feb 9 19:04:46.226061 env[1623]: time="2024-02-09T19:04:46.225974488Z" level=info msg="StartContainer for \"81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34\"" Feb 9 19:04:46.273514 systemd[1]: Started cri-containerd-81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34.scope. Feb 9 19:04:46.354374 env[1623]: time="2024-02-09T19:04:46.354330839Z" level=info msg="StartContainer for \"81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34\" returns successfully" Feb 9 19:04:46.382411 systemd[1]: cri-containerd-81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34.scope: Deactivated successfully. 
Feb 9 19:04:46.399539 kubelet[2051]: E0209 19:04:46.399457 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:46.683135 env[1623]: time="2024-02-09T19:04:46.683076767Z" level=info msg="shim disconnected" id=81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34 Feb 9 19:04:46.683135 env[1623]: time="2024-02-09T19:04:46.683138418Z" level=warning msg="cleaning up after shim disconnected" id=81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34 namespace=k8s.io Feb 9 19:04:46.683569 env[1623]: time="2024-02-09T19:04:46.683150489Z" level=info msg="cleaning up dead shim" Feb 9 19:04:46.705763 env[1623]: time="2024-02-09T19:04:46.705628588Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4105 runtime=io.containerd.runc.v2\n" Feb 9 19:04:46.706153 env[1623]: time="2024-02-09T19:04:46.706122854Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:46.710328 env[1623]: time="2024-02-09T19:04:46.710279647Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:46.712719 env[1623]: time="2024-02-09T19:04:46.712680870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:46.713380 env[1623]: time="2024-02-09T19:04:46.713344995Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:04:46.715756 env[1623]: time="2024-02-09T19:04:46.715723419Z" level=info msg="CreateContainer within sandbox \"c15616914a12f324892f3c027176ece8c395e35514af85d0638bb9877360c5ee\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:04:46.736788 env[1623]: time="2024-02-09T19:04:46.736638114Z" level=info msg="CreateContainer within sandbox \"c15616914a12f324892f3c027176ece8c395e35514af85d0638bb9877360c5ee\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2646853b3cb8856de6c37fa5784f42a049936163080f74d0e9a6385a6328c7bd\"" Feb 9 19:04:46.739102 env[1623]: time="2024-02-09T19:04:46.738993068Z" level=info msg="StartContainer for \"2646853b3cb8856de6c37fa5784f42a049936163080f74d0e9a6385a6328c7bd\"" Feb 9 19:04:46.773863 systemd[1]: Started cri-containerd-2646853b3cb8856de6c37fa5784f42a049936163080f74d0e9a6385a6328c7bd.scope. Feb 9 19:04:46.845885 env[1623]: time="2024-02-09T19:04:46.845811990Z" level=info msg="StartContainer for \"2646853b3cb8856de6c37fa5784f42a049936163080f74d0e9a6385a6328c7bd\" returns successfully" Feb 9 19:04:47.076643 env[1623]: time="2024-02-09T19:04:47.076607634Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:04:47.097393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767270839.mount: Deactivated successfully. 
Feb 9 19:04:47.103306 env[1623]: time="2024-02-09T19:04:47.103133532Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164\"" Feb 9 19:04:47.104154 env[1623]: time="2024-02-09T19:04:47.104118794Z" level=info msg="StartContainer for \"1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164\"" Feb 9 19:04:47.133592 systemd[1]: Started cri-containerd-1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164.scope. Feb 9 19:04:47.205547 kubelet[2051]: I0209 19:04:47.205318 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-kkgsr" podStartSLOduration=-9.223372031649502e+09 pod.CreationTimestamp="2024-02-09 19:04:42 +0000 UTC" firstStartedPulling="2024-02-09 19:04:44.048262412 +0000 UTC m=+109.263833724" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:47.205180922 +0000 UTC m=+112.420752294" watchObservedRunningTime="2024-02-09 19:04:47.205273515 +0000 UTC m=+112.420844837" Feb 9 19:04:47.206560 env[1623]: time="2024-02-09T19:04:47.206505441Z" level=info msg="StartContainer for \"1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164\" returns successfully" Feb 9 19:04:47.222536 systemd[1]: cri-containerd-1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164.scope: Deactivated successfully. 
Feb 9 19:04:47.258595 env[1623]: time="2024-02-09T19:04:47.258541873Z" level=info msg="shim disconnected" id=1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164 Feb 9 19:04:47.259144 env[1623]: time="2024-02-09T19:04:47.259100279Z" level=warning msg="cleaning up after shim disconnected" id=1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164 namespace=k8s.io Feb 9 19:04:47.259312 env[1623]: time="2024-02-09T19:04:47.259292500Z" level=info msg="cleaning up dead shim" Feb 9 19:04:47.267091 kubelet[2051]: W0209 19:04:47.264271 2051 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21601c11_c239_404e_9201_d910334c7e83.slice/cri-containerd-e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc.scope WatchSource:0}: container "e8c42fbb591a40c43ed8d101b2c8730ca8c92cf68cf57814398c538224da3cdc" in namespace "k8s.io": not found Feb 9 19:04:47.281616 env[1623]: time="2024-02-09T19:04:47.281568441Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4206 runtime=io.containerd.runc.v2\n" Feb 9 19:04:47.400802 kubelet[2051]: E0209 19:04:47.399834 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:47.621872 kubelet[2051]: I0209 19:04:47.621656 2051 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=21601c11-c239-404e-9201-d910334c7e83 path="/var/lib/kubelet/pods/21601c11-c239-404e-9201-d910334c7e83/volumes" Feb 9 19:04:47.906035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:48.081255 env[1623]: time="2024-02-09T19:04:48.081207298Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:04:48.110742 env[1623]: time="2024-02-09T19:04:48.110691516Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e\"" Feb 9 19:04:48.111478 env[1623]: time="2024-02-09T19:04:48.111449953Z" level=info msg="StartContainer for \"5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e\"" Feb 9 19:04:48.150758 systemd[1]: Started cri-containerd-5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e.scope. Feb 9 19:04:48.199314 env[1623]: time="2024-02-09T19:04:48.199224159Z" level=info msg="StartContainer for \"5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e\" returns successfully" Feb 9 19:04:48.204385 systemd[1]: cri-containerd-5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e.scope: Deactivated successfully. 
Feb 9 19:04:48.245372 env[1623]: time="2024-02-09T19:04:48.245137865Z" level=info msg="shim disconnected" id=5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e Feb 9 19:04:48.245372 env[1623]: time="2024-02-09T19:04:48.245367193Z" level=warning msg="cleaning up after shim disconnected" id=5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e namespace=k8s.io Feb 9 19:04:48.245701 env[1623]: time="2024-02-09T19:04:48.245383871Z" level=info msg="cleaning up dead shim" Feb 9 19:04:48.256452 env[1623]: time="2024-02-09T19:04:48.256397371Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4264 runtime=io.containerd.runc.v2\n" Feb 9 19:04:48.400271 kubelet[2051]: E0209 19:04:48.400219 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:04:48.906296 systemd[1]: run-containerd-runc-k8s.io-5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e-runc.sFzLdO.mount: Deactivated successfully. Feb 9 19:04:48.906434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:49.086995 env[1623]: time="2024-02-09T19:04:49.086949089Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:04:49.114197 env[1623]: time="2024-02-09T19:04:49.114132672Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174\"" Feb 9 19:04:49.115137 env[1623]: time="2024-02-09T19:04:49.115105595Z" level=info msg="StartContainer for \"f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174\"" Feb 9 19:04:49.160407 systemd[1]: Started cri-containerd-f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174.scope. Feb 9 19:04:49.227475 systemd[1]: cri-containerd-f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174.scope: Deactivated successfully. 
Feb 9 19:04:49.230019 env[1623]: time="2024-02-09T19:04:49.229795719Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e7754c_9316_489d_a97f_02982d71b180.slice/cri-containerd-f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174.scope/memory.events\": no such file or directory"
Feb 9 19:04:49.236466 env[1623]: time="2024-02-09T19:04:49.236410717Z" level=info msg="StartContainer for \"f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174\" returns successfully"
Feb 9 19:04:49.277549 env[1623]: time="2024-02-09T19:04:49.277491100Z" level=info msg="shim disconnected" id=f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174
Feb 9 19:04:49.277549 env[1623]: time="2024-02-09T19:04:49.277547253Z" level=warning msg="cleaning up after shim disconnected" id=f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174 namespace=k8s.io
Feb 9 19:04:49.277940 env[1623]: time="2024-02-09T19:04:49.277559713Z" level=info msg="cleaning up dead shim"
Feb 9 19:04:49.290458 env[1623]: time="2024-02-09T19:04:49.290402396Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4323 runtime=io.containerd.runc.v2\n"
Feb 9 19:04:49.400762 kubelet[2051]: E0209 19:04:49.400689 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:49.581087 kubelet[2051]: I0209 19:04:49.581055 2051 setters.go:548] "Node became not ready" node="172.31.23.81" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:04:49.580895115 +0000 UTC m=+114.796466430 LastTransitionTime:2024-02-09 19:04:49.580895115 +0000 UTC m=+114.796466430 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 19:04:49.906209 systemd[1]: run-containerd-runc-k8s.io-f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174-runc.a5YXp8.mount: Deactivated successfully.
Feb 9 19:04:49.906343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174-rootfs.mount: Deactivated successfully.
Feb 9 19:04:50.101437 env[1623]: time="2024-02-09T19:04:50.101382673Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:04:50.131308 env[1623]: time="2024-02-09T19:04:50.131253598Z" level=info msg="CreateContainer within sandbox \"d5e11e39e2e36bb14180b29041a8b1dbb28718cfd2fe363edf2df1ce31a94fef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156\""
Feb 9 19:04:50.132055 env[1623]: time="2024-02-09T19:04:50.132016593Z" level=info msg="StartContainer for \"ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156\""
Feb 9 19:04:50.162352 systemd[1]: Started cri-containerd-ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156.scope.
Feb 9 19:04:50.210771 env[1623]: time="2024-02-09T19:04:50.210686199Z" level=info msg="StartContainer for \"ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156\" returns successfully"
Feb 9 19:04:50.381477 kubelet[2051]: W0209 19:04:50.381275 2051 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e7754c_9316_489d_a97f_02982d71b180.slice/cri-containerd-81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34.scope WatchSource:0}: task 81c795f62a1f39cf86297fd5b44e002f4af05afeaf5b4cfc8ebaa7ac6b88df34 not found: not found
Feb 9 19:04:50.403502 kubelet[2051]: E0209 19:04:50.403433 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:50.906842 systemd[1]: run-containerd-runc-k8s.io-ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156-runc.4KYZuT.mount: Deactivated successfully.
Feb 9 19:04:50.919811 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 19:04:51.118939 kubelet[2051]: I0209 19:04:51.118907 2051 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cmmdx" podStartSLOduration=6.118834485 pod.CreationTimestamp="2024-02-09 19:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:51.118002857 +0000 UTC m=+116.333574179" watchObservedRunningTime="2024-02-09 19:04:51.118834485 +0000 UTC m=+116.334405806"
Feb 9 19:04:51.404150 kubelet[2051]: E0209 19:04:51.404109 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:51.665982 systemd[1]: run-containerd-runc-k8s.io-ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156-runc.c7SIDA.mount: Deactivated successfully.
Feb 9 19:04:52.405015 kubelet[2051]: E0209 19:04:52.404984 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:53.406292 kubelet[2051]: E0209 19:04:53.406254 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:53.498868 kubelet[2051]: W0209 19:04:53.498822 2051 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e7754c_9316_489d_a97f_02982d71b180.slice/cri-containerd-1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164.scope WatchSource:0}: task 1830878ec76875789c25c7961c0fb05da69802e34e7370e10b0d18021ca9e164 not found: not found
Feb 9 19:04:53.979941 systemd[1]: run-containerd-runc-k8s.io-ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156-runc.Db2CVq.mount: Deactivated successfully.
Feb 9 19:04:54.059005 (udev-worker)[4908]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:04:54.068139 (udev-worker)[4907]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:04:54.077110 systemd-networkd[1448]: lxc_health: Link UP
Feb 9 19:04:54.128756 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:04:54.128938 systemd-networkd[1448]: lxc_health: Gained carrier
Feb 9 19:04:54.407117 kubelet[2051]: E0209 19:04:54.407077 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:55.179824 systemd-networkd[1448]: lxc_health: Gained IPv6LL
Feb 9 19:04:55.308854 kubelet[2051]: E0209 19:04:55.308818 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:55.332270 env[1623]: time="2024-02-09T19:04:55.331967936Z" level=info msg="StopPodSandbox for \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\""
Feb 9 19:04:55.332270 env[1623]: time="2024-02-09T19:04:55.332126333Z" level=info msg="TearDown network for sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" successfully"
Feb 9 19:04:55.332270 env[1623]: time="2024-02-09T19:04:55.332190721Z" level=info msg="StopPodSandbox for \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" returns successfully"
Feb 9 19:04:55.335447 env[1623]: time="2024-02-09T19:04:55.334941296Z" level=info msg="RemovePodSandbox for \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\""
Feb 9 19:04:55.335447 env[1623]: time="2024-02-09T19:04:55.334985010Z" level=info msg="Forcibly stopping sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\""
Feb 9 19:04:55.335447 env[1623]: time="2024-02-09T19:04:55.335116663Z" level=info msg="TearDown network for sandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" successfully"
Feb 9 19:04:55.341991 env[1623]: time="2024-02-09T19:04:55.341936693Z" level=info msg="RemovePodSandbox \"6c26249f45b26ad0bfe3dfd6b53e65a25f107f0ca19c77474a75897485d9b5c9\" returns successfully"
Feb 9 19:04:55.342735 env[1623]: time="2024-02-09T19:04:55.342705633Z" level=info msg="StopPodSandbox for \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\""
Feb 9 19:04:55.343003 env[1623]: time="2024-02-09T19:04:55.342952008Z" level=info msg="TearDown network for sandbox \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\" successfully"
Feb 9 19:04:55.343117 env[1623]: time="2024-02-09T19:04:55.343095616Z" level=info msg="StopPodSandbox for \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\" returns successfully"
Feb 9 19:04:55.343602 env[1623]: time="2024-02-09T19:04:55.343563311Z" level=info msg="RemovePodSandbox for \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\""
Feb 9 19:04:55.343768 env[1623]: time="2024-02-09T19:04:55.343723558Z" level=info msg="Forcibly stopping sandbox \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\""
Feb 9 19:04:55.343953 env[1623]: time="2024-02-09T19:04:55.343915022Z" level=info msg="TearDown network for sandbox \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\" successfully"
Feb 9 19:04:55.349167 env[1623]: time="2024-02-09T19:04:55.349098617Z" level=info msg="RemovePodSandbox \"ebd5ffeb3b262a5299f9697b1a1f052a90ff5063749e653aeccfce58c842d59e\" returns successfully"
Feb 9 19:04:55.408851 kubelet[2051]: E0209 19:04:55.408739 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:56.423897 kubelet[2051]: E0209 19:04:56.423859 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:56.627414 kubelet[2051]: W0209 19:04:56.622294 2051 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e7754c_9316_489d_a97f_02982d71b180.slice/cri-containerd-5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e.scope WatchSource:0}: task 5eefdae032d2352d612dcda3a2ee88418e97fef1e50ab5f07bf980bb07ab900e not found: not found
Feb 9 19:04:57.425022 kubelet[2051]: E0209 19:04:57.424975 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:58.425695 kubelet[2051]: E0209 19:04:58.425642 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:58.716538 systemd[1]: run-containerd-runc-k8s.io-ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156-runc.TeUEA0.mount: Deactivated successfully.
Feb 9 19:04:59.426881 kubelet[2051]: E0209 19:04:59.426828 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:04:59.742236 kubelet[2051]: W0209 19:04:59.741797 2051 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76e7754c_9316_489d_a97f_02982d71b180.slice/cri-containerd-f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174.scope WatchSource:0}: task f5ad29d737de33055c9fe0bc33f0d929a89f732fb312545e6350a6fd7e7f6174 not found: not found
Feb 9 19:05:00.427783 kubelet[2051]: E0209 19:05:00.427748 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:00.975405 systemd[1]: run-containerd-runc-k8s.io-ca32554d1cc8fb21d97767d471281d65cda6d6ef99068abb2e64ffcc6d234156-runc.xbUQ2t.mount: Deactivated successfully.
Feb 9 19:05:01.428661 kubelet[2051]: E0209 19:05:01.428608 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:02.429594 kubelet[2051]: E0209 19:05:02.429541 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:03.430563 kubelet[2051]: E0209 19:05:03.430513 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:04.431271 kubelet[2051]: E0209 19:05:04.431218 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:05.432295 kubelet[2051]: E0209 19:05:05.432245 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:06.432758 kubelet[2051]: E0209 19:05:06.432703 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:07.434166 kubelet[2051]: E0209 19:05:07.434116 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:08.435089 kubelet[2051]: E0209 19:05:08.435032 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:09.435868 kubelet[2051]: E0209 19:05:09.435772 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:10.436350 kubelet[2051]: E0209 19:05:10.436297 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:11.437039 kubelet[2051]: E0209 19:05:11.436988 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:12.437838 kubelet[2051]: E0209 19:05:12.437784 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:13.438164 kubelet[2051]: E0209 19:05:13.437993 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:14.439311 kubelet[2051]: E0209 19:05:14.439260 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:15.309241 kubelet[2051]: E0209 19:05:15.309150 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:15.440472 kubelet[2051]: E0209 19:05:15.440409 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:16.441590 kubelet[2051]: E0209 19:05:16.441535 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:17.442221 kubelet[2051]: E0209 19:05:17.442168 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:18.442740 kubelet[2051]: E0209 19:05:18.442692 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:19.443262 kubelet[2051]: E0209 19:05:19.443180 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:20.443400 kubelet[2051]: E0209 19:05:20.443345 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:20.768506 kubelet[2051]: E0209 19:05:20.768324 2051 controller.go:189] failed to update lease, error: Put "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:05:21.443761 kubelet[2051]: E0209 19:05:21.443708 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:22.444808 kubelet[2051]: E0209 19:05:22.444540 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:23.446064 kubelet[2051]: E0209 19:05:23.445879 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:24.447097 kubelet[2051]: E0209 19:05:24.447043 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:25.447291 kubelet[2051]: E0209 19:05:25.447236 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:26.447807 kubelet[2051]: E0209 19:05:26.447741 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:27.448125 kubelet[2051]: E0209 19:05:27.448069 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:28.448771 kubelet[2051]: E0209 19:05:28.448724 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:29.449445 kubelet[2051]: E0209 19:05:29.449394 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:30.449632 kubelet[2051]: E0209 19:05:30.449579 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:30.769918 kubelet[2051]: E0209 19:05:30.769617 2051 controller.go:189] failed to update lease, error: Put "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:05:31.450205 kubelet[2051]: E0209 19:05:31.450154 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:32.451217 kubelet[2051]: E0209 19:05:32.451162 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:33.452214 kubelet[2051]: E0209 19:05:33.452158 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:34.453096 kubelet[2051]: E0209 19:05:34.453039 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:35.308795 kubelet[2051]: E0209 19:05:35.308749 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:35.453413 kubelet[2051]: E0209 19:05:35.453361 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:36.454098 kubelet[2051]: E0209 19:05:36.454053 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:37.455159 kubelet[2051]: E0209 19:05:37.455111 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:38.456263 kubelet[2051]: E0209 19:05:38.456214 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:39.456766 kubelet[2051]: E0209 19:05:39.456724 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:40.457345 kubelet[2051]: E0209 19:05:40.457292 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:40.770414 kubelet[2051]: E0209 19:05:40.770163 2051 controller.go:189] failed to update lease, error: Put "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:05:41.458016 kubelet[2051]: E0209 19:05:41.457972 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:42.458166 kubelet[2051]: E0209 19:05:42.458096 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:43.459197 kubelet[2051]: E0209 19:05:43.459151 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:44.117431 kubelet[2051]: E0209 19:05:44.117363 2051 controller.go:189] failed to update lease, error: Put "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": unexpected EOF
Feb 9 19:05:44.129657 kubelet[2051]: E0209 19:05:44.128088 2051 controller.go:189] failed to update lease, error: Put "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": read tcp 172.31.23.81:37192->172.31.31.36:6443: read: connection reset by peer
Feb 9 19:05:44.129657 kubelet[2051]: I0209 19:05:44.128139 2051 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Feb 9 19:05:44.129657 kubelet[2051]: E0209 19:05:44.129105 2051 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": dial tcp 172.31.31.36:6443: connect: connection refused
Feb 9 19:05:44.330089 kubelet[2051]: E0209 19:05:44.330043 2051 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": dial tcp 172.31.31.36:6443: connect: connection refused
Feb 9 19:05:44.460112 kubelet[2051]: E0209 19:05:44.459996 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:44.731659 kubelet[2051]: E0209 19:05:44.731373 2051 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": dial tcp 172.31.31.36:6443: connect: connection refused
Feb 9 19:05:45.460281 kubelet[2051]: E0209 19:05:45.460132 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:46.461278 kubelet[2051]: E0209 19:05:46.461186 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:47.462043 kubelet[2051]: E0209 19:05:47.461995 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:48.462511 kubelet[2051]: E0209 19:05:48.462468 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:49.463026 kubelet[2051]: E0209 19:05:49.462971 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:50.463924 kubelet[2051]: E0209 19:05:50.463871 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:51.464923 kubelet[2051]: E0209 19:05:51.464883 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:52.465591 kubelet[2051]: E0209 19:05:52.465539 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:53.465993 kubelet[2051]: E0209 19:05:53.465910 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:54.466301 kubelet[2051]: E0209 19:05:54.466246 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:55.308471 kubelet[2051]: E0209 19:05:55.308420 2051 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:55.466417 kubelet[2051]: E0209 19:05:55.466377 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:55.532361 kubelet[2051]: E0209 19:05:55.532300 2051 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.81?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 9 19:05:56.466606 kubelet[2051]: E0209 19:05:56.466551 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:57.466855 kubelet[2051]: E0209 19:05:57.466808 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:58.467480 kubelet[2051]: E0209 19:05:58.467440 2051 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"