Feb 12 21:55:08.108533 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 21:55:08.108565 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:55:08.108581 kernel: BIOS-provided physical RAM map:
Feb 12 21:55:08.108592 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 21:55:08.108602 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 21:55:08.108613 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 21:55:08.108630 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 12 21:55:08.108641 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 12 21:55:08.108653 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 12 21:55:08.108664 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 21:55:08.108675 kernel: NX (Execute Disable) protection: active
Feb 12 21:55:08.108687 kernel: SMBIOS 2.7 present.
Feb 12 21:55:08.108698 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 12 21:55:08.108710 kernel: Hypervisor detected: KVM
Feb 12 21:55:08.108727 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 21:55:08.108740 kernel: kvm-clock: cpu 0, msr 61faa001, primary cpu clock
Feb 12 21:55:08.108751 kernel: kvm-clock: using sched offset of 6799371162 cycles
Feb 12 21:55:08.108764 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 21:55:08.108777 kernel: tsc: Detected 2500.004 MHz processor
Feb 12 21:55:08.108790 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 21:55:08.108806 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 21:55:08.108818 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 12 21:55:08.108831 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 21:55:08.108844 kernel: Using GB pages for direct mapping
Feb 12 21:55:08.108857 kernel: ACPI: Early table checksum verification disabled
Feb 12 21:55:08.108870 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 12 21:55:08.108883 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 12 21:55:08.108896 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 12 21:55:08.108909 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 12 21:55:08.108925 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 12 21:55:08.108938 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 12 21:55:08.108950 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 12 21:55:08.108964 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 12 21:55:08.108976 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 12 21:55:08.108989 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 12 21:55:08.109002 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 12 21:55:08.109015 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 12 21:55:08.109031 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 12 21:55:08.109044 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 12 21:55:08.109057 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 12 21:55:08.109075 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 12 21:55:08.109089 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 12 21:55:08.109103 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 12 21:55:08.109117 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 12 21:55:08.109133 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 12 21:55:08.109147 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 12 21:55:08.109161 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 12 21:55:08.109175 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 21:55:08.109188 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 21:55:08.109202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 12 21:55:08.109216 kernel: NUMA: Initialized distance table, cnt=1
Feb 12 21:55:08.109230 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 12 21:55:08.109246 kernel: Zone ranges:
Feb 12 21:55:08.109260 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 21:55:08.109274 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 12 21:55:08.109288 kernel: Normal empty
Feb 12 21:55:08.109301 kernel: Movable zone start for each node
Feb 12 21:55:08.109315 kernel: Early memory node ranges
Feb 12 21:55:08.109336 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 21:55:08.109350 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 12 21:55:08.109364 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 12 21:55:08.109381 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 21:55:08.109395 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 21:55:08.109409 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 12 21:55:08.109423 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 12 21:55:08.109451 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 21:55:08.109463 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 12 21:55:08.109474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 21:55:08.109486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 21:55:08.109498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 21:55:08.109514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 21:55:08.109591 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 21:55:08.109605 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 21:55:08.109617 kernel: TSC deadline timer available
Feb 12 21:55:08.109628 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 21:55:08.109642 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 12 21:55:08.109654 kernel: Booting paravirtualized kernel on KVM
Feb 12 21:55:08.109667 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 21:55:08.109680 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 21:55:08.109697 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 21:55:08.109710 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 21:55:08.109723 kernel: pcpu-alloc: [0] 0 1
Feb 12 21:55:08.109736 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Feb 12 21:55:08.109748 kernel: kvm-guest: PV spinlocks enabled
Feb 12 21:55:08.109761 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 21:55:08.109774 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 12 21:55:08.109786 kernel: Policy zone: DMA32
Feb 12 21:55:08.109802 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:55:08.109819 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 21:55:08.109831 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 21:55:08.109844 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 21:55:08.109856 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 21:55:08.109870 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved)
Feb 12 21:55:08.109883 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 21:55:08.109895 kernel: Kernel/User page tables isolation: enabled
Feb 12 21:55:08.109908 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 21:55:08.109924 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 21:55:08.109936 kernel: rcu: Hierarchical RCU implementation.
Feb 12 21:55:08.109950 kernel: rcu: RCU event tracing is enabled.
Feb 12 21:55:08.109965 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 21:55:08.109980 kernel: Rude variant of Tasks RCU enabled.
Feb 12 21:55:08.109993 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 21:55:08.110006 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 21:55:08.110020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 21:55:08.110034 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 21:55:08.110051 kernel: random: crng init done
Feb 12 21:55:08.110063 kernel: Console: colour VGA+ 80x25
Feb 12 21:55:08.110075 kernel: printk: console [ttyS0] enabled
Feb 12 21:55:08.110086 kernel: ACPI: Core revision 20210730
Feb 12 21:55:08.110098 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 12 21:55:08.110110 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 21:55:08.110122 kernel: x2apic enabled
Feb 12 21:55:08.110134 kernel: Switched APIC routing to physical x2apic.
Feb 12 21:55:08.110145 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 12 21:55:08.110252 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Feb 12 21:55:08.110271 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 21:55:08.110284 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 21:55:08.110296 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 21:55:08.110318 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 21:55:08.110333 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 21:55:08.110345 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 21:55:08.110358 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 12 21:55:08.110370 kernel: RETBleed: Vulnerable
Feb 12 21:55:08.110382 kernel: Speculative Store Bypass: Vulnerable
Feb 12 21:55:08.110394 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 21:55:08.110406 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 21:55:08.110418 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 12 21:55:08.110445 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 21:55:08.110460 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 21:55:08.110473 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 21:55:08.110485 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 12 21:55:08.110498 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 12 21:55:08.110511 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 12 21:55:08.110525 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 12 21:55:08.110540 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 12 21:55:08.110553 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 12 21:55:08.110565 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 21:55:08.110578 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 12 21:55:08.110591 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 12 21:55:08.110605 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 12 21:55:08.110619 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 12 21:55:08.110633 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 12 21:55:08.110647 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 12 21:55:08.110661 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 12 21:55:08.110675 kernel: Freeing SMP alternatives memory: 32K
Feb 12 21:55:08.110691 kernel: pid_max: default: 32768 minimum: 301
Feb 12 21:55:08.110705 kernel: LSM: Security Framework initializing
Feb 12 21:55:08.110719 kernel: SELinux: Initializing.
Feb 12 21:55:08.110733 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 21:55:08.110747 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 21:55:08.110761 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 12 21:55:08.110775 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 12 21:55:08.110790 kernel: signal: max sigframe size: 3632
Feb 12 21:55:08.110805 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 21:55:08.110817 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 21:55:08.110831 kernel: smp: Bringing up secondary CPUs ...
Feb 12 21:55:08.110849 kernel: x86: Booting SMP configuration:
Feb 12 21:55:08.110863 kernel: .... node #0, CPUs: #1
Feb 12 21:55:08.110877 kernel: kvm-clock: cpu 1, msr 61faa041, secondary cpu clock
Feb 12 21:55:08.110892 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Feb 12 21:55:08.110907 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 12 21:55:08.110922 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 21:55:08.110937 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 21:55:08.110950 kernel: smpboot: Max logical packages: 1
Feb 12 21:55:08.110968 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Feb 12 21:55:08.110982 kernel: devtmpfs: initialized
Feb 12 21:55:08.110997 kernel: x86/mm: Memory block size: 128MB
Feb 12 21:55:08.111012 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 21:55:08.111027 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 21:55:08.111042 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 21:55:08.111056 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 21:55:08.111070 kernel: audit: initializing netlink subsys (disabled)
Feb 12 21:55:08.111085 kernel: audit: type=2000 audit(1707774906.750:1): state=initialized audit_enabled=0 res=1
Feb 12 21:55:08.111102 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 21:55:08.111117 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 21:55:08.111132 kernel: cpuidle: using governor menu
Feb 12 21:55:08.111147 kernel: ACPI: bus type PCI registered
Feb 12 21:55:08.111162 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 21:55:08.111176 kernel: dca service started, version 1.12.1
Feb 12 21:55:08.111191 kernel: PCI: Using configuration type 1 for base access
Feb 12 21:55:08.111206 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 21:55:08.111221 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 21:55:08.111238 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 21:55:08.111252 kernel: ACPI: Added _OSI(Module Device)
Feb 12 21:55:08.111267 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 21:55:08.111281 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 21:55:08.111296 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 21:55:08.111311 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 21:55:08.111326 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 21:55:08.111340 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 21:55:08.111355 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 12 21:55:08.111372 kernel: ACPI: Interpreter enabled
Feb 12 21:55:08.111386 kernel: ACPI: PM: (supports S0 S5)
Feb 12 21:55:08.111400 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 21:55:08.111415 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 21:55:08.111440 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 12 21:55:08.111455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 21:55:08.111641 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 21:55:08.111772 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 21:55:08.111795 kernel: acpiphp: Slot [3] registered
Feb 12 21:55:08.111810 kernel: acpiphp: Slot [4] registered
Feb 12 21:55:08.111825 kernel: acpiphp: Slot [5] registered
Feb 12 21:55:08.111840 kernel: acpiphp: Slot [6] registered
Feb 12 21:55:08.111854 kernel: acpiphp: Slot [7] registered
Feb 12 21:55:08.111869 kernel: acpiphp: Slot [8] registered
Feb 12 21:55:08.111884 kernel: acpiphp: Slot [9] registered
Feb 12 21:55:08.111899 kernel: acpiphp: Slot [10] registered
Feb 12 21:55:08.111914 kernel: acpiphp: Slot [11] registered
Feb 12 21:55:08.111931 kernel: acpiphp: Slot [12] registered
Feb 12 21:55:08.111945 kernel: acpiphp: Slot [13] registered
Feb 12 21:55:08.111960 kernel: acpiphp: Slot [14] registered
Feb 12 21:55:08.111974 kernel: acpiphp: Slot [15] registered
Feb 12 21:55:08.111989 kernel: acpiphp: Slot [16] registered
Feb 12 21:55:08.112003 kernel: acpiphp: Slot [17] registered
Feb 12 21:55:08.112018 kernel: acpiphp: Slot [18] registered
Feb 12 21:55:08.112032 kernel: acpiphp: Slot [19] registered
Feb 12 21:55:08.112047 kernel: acpiphp: Slot [20] registered
Feb 12 21:55:08.112062 kernel: acpiphp: Slot [21] registered
Feb 12 21:55:08.112079 kernel: acpiphp: Slot [22] registered
Feb 12 21:55:08.112093 kernel: acpiphp: Slot [23] registered
Feb 12 21:55:08.112108 kernel: acpiphp: Slot [24] registered
Feb 12 21:55:08.112123 kernel: acpiphp: Slot [25] registered
Feb 12 21:55:08.112138 kernel: acpiphp: Slot [26] registered
Feb 12 21:55:08.112152 kernel: acpiphp: Slot [27] registered
Feb 12 21:55:08.112167 kernel: acpiphp: Slot [28] registered
Feb 12 21:55:08.112182 kernel: acpiphp: Slot [29] registered
Feb 12 21:55:08.112196 kernel: acpiphp: Slot [30] registered
Feb 12 21:55:08.112261 kernel: acpiphp: Slot [31] registered
Feb 12 21:55:08.112277 kernel: PCI host bridge to bus 0000:00
Feb 12 21:55:08.112413 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 21:55:08.112548 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 21:55:08.112659 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 21:55:08.112770 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 21:55:08.112881 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 21:55:08.113021 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 21:55:08.113164 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 21:55:08.113303 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 12 21:55:08.113478 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 12 21:55:08.113700 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 12 21:55:08.113886 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 12 21:55:08.114009 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 12 21:55:08.114171 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 12 21:55:08.114369 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 12 21:55:08.114606 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 12 21:55:08.115115 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 12 21:55:08.115244 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 14648 usecs
Feb 12 21:55:08.115369 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 12 21:55:08.115503 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 12 21:55:08.115704 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 12 21:55:08.117366 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 21:55:08.117551 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 12 21:55:08.117694 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 12 21:55:08.117841 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 12 21:55:08.117976 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 12 21:55:08.118002 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 21:55:08.118019 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 21:55:08.118035 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 21:55:08.118050 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 21:55:08.118065 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 21:55:08.118080 kernel: iommu: Default domain type: Translated
Feb 12 21:55:08.118274 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 21:55:08.118440 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 12 21:55:08.118661 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 21:55:08.118808 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 12 21:55:08.118829 kernel: vgaarb: loaded
Feb 12 21:55:08.118845 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 21:55:08.118861 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 21:55:08.118875 kernel: PTP clock support registered
Feb 12 21:55:08.118889 kernel: PCI: Using ACPI for IRQ routing
Feb 12 21:55:08.118904 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 21:55:08.118919 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 21:55:08.118933 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 12 21:55:08.118952 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 12 21:55:08.118967 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 12 21:55:08.118979 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 21:55:08.118992 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 21:55:08.119006 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 21:55:08.119020 kernel: pnp: PnP ACPI init
Feb 12 21:55:08.119033 kernel: pnp: PnP ACPI: found 5 devices
Feb 12 21:55:08.119048 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 21:55:08.119064 kernel: NET: Registered PF_INET protocol family
Feb 12 21:55:08.119079 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 21:55:08.119093 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 21:55:08.119107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 21:55:08.119122 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 21:55:08.119136 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 21:55:08.119151 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 21:55:08.119166 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:55:08.119182 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:55:08.119201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 21:55:08.119217 kernel: NET: Registered PF_XDP protocol family
Feb 12 21:55:08.119366 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 21:55:08.119514 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 21:55:08.119625 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 21:55:08.119733 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 21:55:08.119856 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 21:55:08.119978 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 21:55:08.120001 kernel: PCI: CLS 0 bytes, default 64
Feb 12 21:55:08.120016 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 21:55:08.120030 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 12 21:55:08.120044 kernel: clocksource: Switched to clocksource tsc
Feb 12 21:55:08.120058 kernel: Initialise system trusted keyrings
Feb 12 21:55:08.120071 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 21:55:08.120086 kernel: Key type asymmetric registered
Feb 12 21:55:08.120099 kernel: Asymmetric key parser 'x509' registered
Feb 12 21:55:08.120116 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 21:55:08.120131 kernel: io scheduler mq-deadline registered
Feb 12 21:55:08.120145 kernel: io scheduler kyber registered
Feb 12 21:55:08.120160 kernel: io scheduler bfq registered
Feb 12 21:55:08.120175 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 21:55:08.120189 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 21:55:08.120204 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 21:55:08.120219 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 21:55:08.120233 kernel: i8042: Warning: Keylock active
Feb 12 21:55:08.120251 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 21:55:08.120265 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 21:55:08.120398 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 12 21:55:08.120529 kernel: rtc_cmos 00:00: registered as rtc0
Feb 12 21:55:08.120643 kernel: rtc_cmos 00:00: setting system clock to 2024-02-12T21:55:07 UTC (1707774907)
Feb 12 21:55:08.120756 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 12 21:55:08.120774 kernel: intel_pstate: CPU model not supported
Feb 12 21:55:08.120789 kernel: NET: Registered PF_INET6 protocol family
Feb 12 21:55:08.120808 kernel: Segment Routing with IPv6
Feb 12 21:55:08.120822 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 21:55:08.120908 kernel: NET: Registered PF_PACKET protocol family
Feb 12 21:55:08.120925 kernel: Key type dns_resolver registered
Feb 12 21:55:08.120939 kernel: IPI shorthand broadcast: enabled
Feb 12 21:55:08.120954 kernel: sched_clock: Marking stable (554121649, 317665431)->(988900739, -117113659)
Feb 12 21:55:08.120967 kernel: registered taskstats version 1
Feb 12 21:55:08.120982 kernel: Loading compiled-in X.509 certificates
Feb 12 21:55:08.120997 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 21:55:08.121015 kernel: Key type .fscrypt registered
Feb 12 21:55:08.121030 kernel: Key type fscrypt-provisioning registered
Feb 12 21:55:08.121044 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 21:55:08.121060 kernel: ima: Allocated hash algorithm: sha1
Feb 12 21:55:08.121075 kernel: ima: No architecture policies found
Feb 12 21:55:08.121089 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 21:55:08.121104 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 21:55:08.121118 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 21:55:08.121133 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 21:55:08.121152 kernel: Run /init as init process
Feb 12 21:55:08.121168 kernel: with arguments:
Feb 12 21:55:08.121184 kernel: /init
Feb 12 21:55:08.121199 kernel: with environment:
Feb 12 21:55:08.121213 kernel: HOME=/
Feb 12 21:55:08.121226 kernel: TERM=linux
Feb 12 21:55:08.121240 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 21:55:08.121258 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 21:55:08.121278 systemd[1]: Detected virtualization amazon.
Feb 12 21:55:08.121293 systemd[1]: Detected architecture x86-64.
Feb 12 21:55:08.121307 systemd[1]: Running in initrd.
Feb 12 21:55:08.121330 systemd[1]: No hostname configured, using default hostname.
Feb 12 21:55:08.121359 systemd[1]: Hostname set to .
Feb 12 21:55:08.121380 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 21:55:08.121395 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 21:55:08.121411 systemd[1]: Queued start job for default target initrd.target.
Feb 12 21:55:08.121426 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 21:55:08.121454 systemd[1]: Reached target cryptsetup.target.
Feb 12 21:55:08.121469 systemd[1]: Reached target paths.target.
Feb 12 21:55:08.121484 systemd[1]: Reached target slices.target.
Feb 12 21:55:08.121498 systemd[1]: Reached target swap.target.
Feb 12 21:55:08.121513 systemd[1]: Reached target timers.target.
Feb 12 21:55:08.121532 systemd[1]: Listening on iscsid.socket.
Feb 12 21:55:08.121548 systemd[1]: Listening on iscsiuio.socket.
Feb 12 21:55:08.121564 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 21:55:08.121579 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 21:55:08.121594 systemd[1]: Listening on systemd-journald.socket.
Feb 12 21:55:08.121610 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 21:55:08.121628 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 21:55:08.121644 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 21:55:08.121662 systemd[1]: Reached target sockets.target.
Feb 12 21:55:08.121678 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 21:55:08.121694 systemd[1]: Finished network-cleanup.service.
Feb 12 21:55:08.121710 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 21:55:08.121726 systemd[1]: Starting systemd-journald.service...
Feb 12 21:55:08.121742 systemd[1]: Starting systemd-modules-load.service...
Feb 12 21:55:08.121758 systemd[1]: Starting systemd-resolved.service...
Feb 12 21:55:08.121774 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 21:55:08.121796 systemd-journald[185]: Journal started
Feb 12 21:55:08.121876 systemd-journald[185]: Runtime Journal (/run/log/journal/ec21fa27ccfb26117fee64d8fab30023) is 4.8M, max 38.7M, 33.9M free.
Feb 12 21:55:08.126451 systemd[1]: Started systemd-journald.service.
Feb 12 21:55:08.153650 systemd-modules-load[186]: Inserted module 'overlay'
Feb 12 21:55:08.329548 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 21:55:08.329605 kernel: Bridge firewalling registered Feb 12 21:55:08.329624 kernel: SCSI subsystem initialized Feb 12 21:55:08.329666 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 21:55:08.329690 kernel: device-mapper: uevent: version 1.0.3 Feb 12 21:55:08.330231 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 21:55:08.330259 kernel: audit: type=1130 audit(1707774908.326:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.156607 systemd-resolved[187]: Positive Trust Anchors: Feb 12 21:55:08.344235 kernel: audit: type=1130 audit(1707774908.326:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.156621 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 21:55:08.351986 kernel: audit: type=1130 audit(1707774908.344:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:55:08.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.156670 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 21:55:08.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.161451 systemd-resolved[187]: Defaulting to hostname 'linux'. Feb 12 21:55:08.368873 kernel: audit: type=1130 audit(1707774908.358:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.202106 systemd-modules-load[186]: Inserted module 'br_netfilter' Feb 12 21:55:08.376268 kernel: audit: type=1130 audit(1707774908.368:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.258651 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 12 21:55:08.327235 systemd[1]: Started systemd-resolved.service. 
Feb 12 21:55:08.327672 systemd[1]: Finished kmod-static-nodes.service. Feb 12 21:55:08.391064 kernel: audit: type=1130 audit(1707774908.380:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.352069 systemd[1]: Finished systemd-fsck-usr.service. Feb 12 21:55:08.359594 systemd[1]: Finished systemd-modules-load.service. Feb 12 21:55:08.376397 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 21:55:08.391133 systemd[1]: Reached target nss-lookup.target. Feb 12 21:55:08.401341 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 21:55:08.403328 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:55:08.406451 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 21:55:08.432356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 21:55:08.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.440462 kernel: audit: type=1130 audit(1707774908.433:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.443324 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:55:08.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:55:08.449451 kernel: audit: type=1130 audit(1707774908.442:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.456904 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 21:55:08.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.466207 kernel: audit: type=1130 audit(1707774908.458:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.459772 systemd[1]: Starting dracut-cmdline.service... Feb 12 21:55:08.476277 dracut-cmdline[206]: dracut-dracut-053 Feb 12 21:55:08.480272 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:55:08.584454 kernel: Loading iSCSI transport class v2.0-870. Feb 12 21:55:08.604502 kernel: iscsi: registered transport (tcp) Feb 12 21:55:08.645978 kernel: iscsi: registered transport (qla4xxx) Feb 12 21:55:08.646058 kernel: QLogic iSCSI HBA Driver Feb 12 21:55:08.688135 systemd[1]: Finished dracut-cmdline.service. Feb 12 21:55:08.689665 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 21:55:08.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:08.751490 kernel: raid6: avx512x4 gen() 7384 MB/s Feb 12 21:55:08.772625 kernel: raid6: avx512x4 xor() 3652 MB/s Feb 12 21:55:08.790483 kernel: raid6: avx512x2 gen() 14170 MB/s Feb 12 21:55:08.808758 kernel: raid6: avx512x2 xor() 11137 MB/s Feb 12 21:55:08.827496 kernel: raid6: avx512x1 gen() 7646 MB/s Feb 12 21:55:08.845479 kernel: raid6: avx512x1 xor() 13797 MB/s Feb 12 21:55:08.864493 kernel: raid6: avx2x4 gen() 3953 MB/s Feb 12 21:55:08.887195 kernel: raid6: avx2x4 xor() 3158 MB/s Feb 12 21:55:08.905735 kernel: raid6: avx2x2 gen() 8230 MB/s Feb 12 21:55:08.924475 kernel: raid6: avx2x2 xor() 7857 MB/s Feb 12 21:55:08.942493 kernel: raid6: avx2x1 gen() 5875 MB/s Feb 12 21:55:08.961559 kernel: raid6: avx2x1 xor() 7645 MB/s Feb 12 21:55:08.981483 kernel: raid6: sse2x4 gen() 4694 MB/s Feb 12 21:55:08.998473 kernel: raid6: sse2x4 xor() 5005 MB/s Feb 12 21:55:09.016480 kernel: raid6: sse2x2 gen() 8926 MB/s Feb 12 21:55:09.033480 kernel: raid6: sse2x2 xor() 4179 MB/s Feb 12 21:55:09.051485 kernel: raid6: sse2x1 gen() 8969 MB/s Feb 12 21:55:09.069610 kernel: raid6: sse2x1 xor() 3770 MB/s Feb 12 21:55:09.069684 kernel: raid6: using algorithm avx512x2 gen() 14170 MB/s Feb 12 21:55:09.069703 kernel: raid6: .... xor() 11137 MB/s, rmw enabled Feb 12 21:55:09.070621 kernel: raid6: using avx512x2 recovery algorithm Feb 12 21:55:09.087461 kernel: xor: automatically using best checksumming function avx Feb 12 21:55:09.294674 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 21:55:09.315256 systemd[1]: Finished dracut-pre-udev.service. Feb 12 21:55:09.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:55:09.317000 audit: BPF prog-id=7 op=LOAD Feb 12 21:55:09.317000 audit: BPF prog-id=8 op=LOAD Feb 12 21:55:09.319141 systemd[1]: Starting systemd-udevd.service... Feb 12 21:55:09.336651 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 21:55:09.348981 systemd[1]: Started systemd-udevd.service. Feb 12 21:55:09.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:09.356682 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 21:55:09.391184 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Feb 12 21:55:09.433581 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 21:55:09.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:09.438417 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 21:55:09.498728 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 21:55:09.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:09.608583 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 21:55:09.608685 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 12 21:55:09.612208 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 12 21:55:09.632460 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Feb 12 21:55:09.640006 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 12 21:55:09.640068 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ee:fa:e5:cd:0d Feb 12 21:55:09.640268 kernel: AES CTR mode by8 optimization enabled Feb 12 21:55:09.643040 (udev-worker)[443]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:55:09.671300 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 12 21:55:09.671636 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 12 21:55:09.682450 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 12 21:55:09.690221 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 21:55:09.690290 kernel: GPT:9289727 != 16777215 Feb 12 21:55:09.690309 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 21:55:09.690327 kernel: GPT:9289727 != 16777215 Feb 12 21:55:09.690341 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 21:55:09.690364 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 21:55:09.760460 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (441) Feb 12 21:55:09.829993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 21:55:09.912015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 21:55:09.926891 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 21:55:09.927040 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 21:55:09.941671 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 21:55:09.944801 systemd[1]: Starting disk-uuid.service... Feb 12 21:55:09.952827 disk-uuid[593]: Primary Header is updated. Feb 12 21:55:09.952827 disk-uuid[593]: Secondary Entries is updated. Feb 12 21:55:09.952827 disk-uuid[593]: Secondary Header is updated. 
Feb 12 21:55:09.958471 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 21:55:09.963599 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 21:55:10.979306 disk-uuid[594]: The operation has completed successfully. Feb 12 21:55:10.980766 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 12 21:55:11.173214 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 21:55:11.173343 systemd[1]: Finished disk-uuid.service. Feb 12 21:55:11.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.184759 systemd[1]: Starting verity-setup.service... Feb 12 21:55:11.209454 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 21:55:11.327428 systemd[1]: Found device dev-mapper-usr.device. Feb 12 21:55:11.337416 systemd[1]: Mounting sysusr-usr.mount... Feb 12 21:55:11.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.349925 systemd[1]: Finished verity-setup.service. Feb 12 21:55:11.492454 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 21:55:11.493798 systemd[1]: Mounted sysusr-usr.mount. Feb 12 21:55:11.494336 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 21:55:11.495290 systemd[1]: Starting ignition-setup.service... Feb 12 21:55:11.499772 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 12 21:55:11.531032 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 12 21:55:11.531098 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 12 21:55:11.531117 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 12 21:55:11.543460 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 12 21:55:11.561098 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 21:55:11.604465 systemd[1]: Finished ignition-setup.service. Feb 12 21:55:11.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.607374 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 21:55:11.641359 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 21:55:11.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.651000 audit: BPF prog-id=9 op=LOAD Feb 12 21:55:11.654198 systemd[1]: Starting systemd-networkd.service... Feb 12 21:55:11.711682 systemd-networkd[1106]: lo: Link UP Feb 12 21:55:11.711692 systemd-networkd[1106]: lo: Gained carrier Feb 12 21:55:11.713738 systemd-networkd[1106]: Enumeration completed Feb 12 21:55:11.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.714085 systemd[1]: Started systemd-networkd.service. Feb 12 21:55:11.715189 systemd[1]: Reached target network.target. Feb 12 21:55:11.716298 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 21:55:11.717114 systemd[1]: Starting iscsiuio.service... 
Feb 12 21:55:11.727099 systemd-networkd[1106]: eth0: Link UP Feb 12 21:55:11.727345 systemd-networkd[1106]: eth0: Gained carrier Feb 12 21:55:11.731772 systemd[1]: Started iscsiuio.service. Feb 12 21:55:11.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.733765 systemd[1]: Starting iscsid.service... Feb 12 21:55:11.738911 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 21:55:11.738911 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 21:55:11.738911 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 21:55:11.738911 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 21:55:11.738911 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 21:55:11.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.753914 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 21:55:11.748627 systemd[1]: Started iscsid.service. Feb 12 21:55:11.751566 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.23.213/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 12 21:55:11.753701 systemd[1]: Starting dracut-initqueue.service... Feb 12 21:55:11.768056 systemd[1]: Finished dracut-initqueue.service. 
Feb 12 21:55:11.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:11.768332 systemd[1]: Reached target remote-fs-pre.target. Feb 12 21:55:11.770292 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 21:55:11.772922 systemd[1]: Reached target remote-fs.target. Feb 12 21:55:11.777183 systemd[1]: Starting dracut-pre-mount.service... Feb 12 21:55:11.803767 systemd[1]: Finished dracut-pre-mount.service. Feb 12 21:55:11.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.228357 ignition[1102]: Ignition 2.14.0 Feb 12 21:55:12.228374 ignition[1102]: Stage: fetch-offline Feb 12 21:55:12.228781 ignition[1102]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:55:12.228825 ignition[1102]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:55:12.242883 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:55:12.244783 ignition[1102]: Ignition finished successfully Feb 12 21:55:12.246184 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 21:55:12.253130 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 12 21:55:12.253174 kernel: audit: type=1130 audit(1707774912.248:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:55:12.249675 systemd[1]: Starting ignition-fetch.service... Feb 12 21:55:12.262788 ignition[1130]: Ignition 2.14.0 Feb 12 21:55:12.262799 ignition[1130]: Stage: fetch Feb 12 21:55:12.262943 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:55:12.262968 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:55:12.270948 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:55:12.272529 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:55:12.286687 ignition[1130]: INFO : PUT result: OK Feb 12 21:55:12.289583 ignition[1130]: DEBUG : parsed url from cmdline: "" Feb 12 21:55:12.289583 ignition[1130]: INFO : no config URL provided Feb 12 21:55:12.289583 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 12 21:55:12.289583 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 12 21:55:12.294130 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:55:12.294130 ignition[1130]: INFO : PUT result: OK Feb 12 21:55:12.294130 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 12 21:55:12.297648 ignition[1130]: INFO : GET result: OK Feb 12 21:55:12.298572 ignition[1130]: DEBUG : parsing config with SHA512: c9d1ae2e0688d956f1b4049434daee1a53f9b10ceed642096762c3f7b050c6a88a7ca5319ab1712ee999422ebef521a11259fccad4bcb9cff63c90b223bfdcad Feb 12 21:55:12.325579 unknown[1130]: fetched base config from "system" Feb 12 21:55:12.325594 unknown[1130]: fetched base config from "system" Feb 12 21:55:12.325602 unknown[1130]: fetched user config from "aws" Feb 12 21:55:12.331459 ignition[1130]: fetch: fetch complete Feb 12 21:55:12.331743 ignition[1130]: fetch: fetch passed Feb 12 21:55:12.331847 ignition[1130]: Ignition 
finished successfully Feb 12 21:55:12.335761 systemd[1]: Finished ignition-fetch.service. Feb 12 21:55:12.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.337079 systemd[1]: Starting ignition-kargs.service... Feb 12 21:55:12.347118 kernel: audit: type=1130 audit(1707774912.335:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.356060 ignition[1136]: Ignition 2.14.0 Feb 12 21:55:12.356121 ignition[1136]: Stage: kargs Feb 12 21:55:12.356308 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:55:12.356333 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:55:12.367809 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:55:12.371513 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:55:12.374444 ignition[1136]: INFO : PUT result: OK Feb 12 21:55:12.381362 ignition[1136]: kargs: kargs passed Feb 12 21:55:12.381463 ignition[1136]: Ignition finished successfully Feb 12 21:55:12.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.382881 systemd[1]: Finished ignition-kargs.service. Feb 12 21:55:12.396015 kernel: audit: type=1130 audit(1707774912.386:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:55:12.388093 systemd[1]: Starting ignition-disks.service... Feb 12 21:55:12.402085 ignition[1142]: Ignition 2.14.0 Feb 12 21:55:12.402096 ignition[1142]: Stage: disks Feb 12 21:55:12.402285 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 21:55:12.402316 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 21:55:12.414316 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 21:55:12.416027 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 21:55:12.417574 ignition[1142]: INFO : PUT result: OK Feb 12 21:55:12.421354 ignition[1142]: disks: disks passed Feb 12 21:55:12.421420 ignition[1142]: Ignition finished successfully Feb 12 21:55:12.429254 systemd[1]: Finished ignition-disks.service. Feb 12 21:55:12.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.432771 systemd[1]: Reached target initrd-root-device.target. Feb 12 21:55:12.445295 kernel: audit: type=1130 audit(1707774912.432:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.445506 systemd[1]: Reached target local-fs-pre.target. Feb 12 21:55:12.449180 systemd[1]: Reached target local-fs.target. Feb 12 21:55:12.450886 systemd[1]: Reached target sysinit.target. Feb 12 21:55:12.450974 systemd[1]: Reached target basic.target. Feb 12 21:55:12.454780 systemd[1]: Starting systemd-fsck-root.service... Feb 12 21:55:12.481505 systemd-fsck[1150]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 12 21:55:12.487538 systemd[1]: Finished systemd-fsck-root.service. 
Feb 12 21:55:12.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.489700 systemd[1]: Mounting sysroot.mount... Feb 12 21:55:12.495334 kernel: audit: type=1130 audit(1707774912.488:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.506449 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 21:55:12.508509 systemd[1]: Mounted sysroot.mount. Feb 12 21:55:12.509516 systemd[1]: Reached target initrd-root-fs.target. Feb 12 21:55:12.523520 systemd[1]: Mounting sysroot-usr.mount... Feb 12 21:55:12.525871 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 21:55:12.525940 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 21:55:12.525978 systemd[1]: Reached target ignition-diskful.target. Feb 12 21:55:12.538129 systemd[1]: Mounted sysroot-usr.mount. Feb 12 21:55:12.544899 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 21:55:12.550560 systemd[1]: Starting initrd-setup-root.service... 
Feb 12 21:55:12.564451 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1167) Feb 12 21:55:12.567901 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 12 21:55:12.567949 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 12 21:55:12.567967 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 12 21:55:12.569293 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 21:55:12.576454 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 12 21:55:12.578840 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory Feb 12 21:55:12.583673 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 21:55:12.590812 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 21:55:12.603018 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 21:55:12.736787 systemd[1]: Finished initrd-setup-root.service. Feb 12 21:55:12.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.739630 systemd[1]: Starting ignition-mount.service... Feb 12 21:55:12.748967 kernel: audit: type=1130 audit(1707774912.738:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:12.749280 systemd[1]: Starting sysroot-boot.service... Feb 12 21:55:12.753312 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 21:55:12.753494 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 12 21:55:12.782638 ignition[1233]: INFO : Ignition 2.14.0
Feb 12 21:55:12.782638 ignition[1233]: INFO : Stage: mount
Feb 12 21:55:12.784825 ignition[1233]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:55:12.784825 ignition[1233]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:55:12.800783 ignition[1233]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:55:12.802858 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:55:12.803161 systemd[1]: Finished sysroot-boot.service.
Feb 12 21:55:12.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:12.810819 ignition[1233]: INFO : PUT result: OK
Feb 12 21:55:12.811766 kernel: audit: type=1130 audit(1707774912.806:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:12.814371 ignition[1233]: INFO : mount: mount passed
Feb 12 21:55:12.816119 ignition[1233]: INFO : Ignition finished successfully
Feb 12 21:55:12.817856 systemd[1]: Finished ignition-mount.service.
Feb 12 21:55:12.819093 systemd[1]: Starting ignition-files.service...
Feb 12 21:55:12.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:12.827461 kernel: audit: type=1130 audit(1707774912.817:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:12.830556 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 21:55:12.848458 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1243)
Feb 12 21:55:12.848515 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:55:12.851328 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:55:12.851365 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:55:12.858451 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:55:12.861062 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 21:55:12.874944 ignition[1262]: INFO : Ignition 2.14.0
Feb 12 21:55:12.874944 ignition[1262]: INFO : Stage: files
Feb 12 21:55:12.877426 ignition[1262]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:55:12.877426 ignition[1262]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:55:12.892494 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:55:12.894723 ignition[1262]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:55:12.896672 ignition[1262]: INFO : PUT result: OK
Feb 12 21:55:12.900520 ignition[1262]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 21:55:12.904944 ignition[1262]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 21:55:12.906612 ignition[1262]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 21:55:12.921360 ignition[1262]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 21:55:12.923266 ignition[1262]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 21:55:12.925658 unknown[1262]: wrote ssh authorized keys file for user: core
Feb 12 21:55:12.927225 ignition[1262]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 21:55:12.929641 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 21:55:12.932081 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 21:55:12.932081 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 21:55:12.932081 ignition[1262]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 21:55:13.122611 systemd-networkd[1106]: eth0: Gained IPv6LL
Feb 12 21:55:13.401832 ignition[1262]: INFO : GET result: OK
Feb 12 21:55:13.635470 ignition[1262]: DEBUG : file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 21:55:13.638274 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 21:55:13.638274 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 21:55:13.638274 ignition[1262]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 21:55:14.049003 ignition[1262]: INFO : GET result: OK
Feb 12 21:55:14.160214 ignition[1262]: DEBUG : file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 21:55:14.164184 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 21:55:14.164184 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 21:55:14.164184 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:55:14.176563 ignition[1262]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3816396701"
Feb 12 21:55:14.180669 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1267)
Feb 12 21:55:14.180701 ignition[1262]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3816396701": device or resource busy
Feb 12 21:55:14.180701 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3816396701", trying btrfs: device or resource busy
Feb 12 21:55:14.180701 ignition[1262]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3816396701"
Feb 12 21:55:14.192311 ignition[1262]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3816396701"
Feb 12 21:55:14.196648 ignition[1262]: INFO : op(3): [started] unmounting "/mnt/oem3816396701"
Feb 12 21:55:14.198050 ignition[1262]: INFO : op(3): [finished] unmounting "/mnt/oem3816396701"
Feb 12 21:55:14.198050 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 21:55:14.198050 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:55:14.198050 ignition[1262]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 21:55:14.206643 systemd[1]: mnt-oem3816396701.mount: Deactivated successfully.
Feb 12 21:55:14.324555 ignition[1262]: INFO : GET result: OK
Feb 12 21:55:14.610196 ignition[1262]: DEBUG : file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 21:55:14.614048 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:55:14.614048 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:55:14.614048 ignition[1262]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 21:55:14.676300 ignition[1262]: INFO : GET result: OK
Feb 12 21:55:15.388178 ignition[1262]: DEBUG : file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 21:55:15.391519 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:55:15.391519 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 21:55:15.391519 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 21:55:15.391519 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:55:15.391519 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:55:15.403836 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:55:15.403836 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:55:15.403836 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 21:55:15.403836 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:55:15.403836 ignition[1262]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem596796984"
Feb 12 21:55:15.403836 ignition[1262]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem596796984": device or resource busy
Feb 12 21:55:15.403836 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem596796984", trying btrfs: device or resource busy
Feb 12 21:55:15.403836 ignition[1262]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem596796984"
Feb 12 21:55:15.403836 ignition[1262]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem596796984"
Feb 12 21:55:15.403836 ignition[1262]: INFO : op(6): [started] unmounting "/mnt/oem596796984"
Feb 12 21:55:15.425509 ignition[1262]: INFO : op(6): [finished] unmounting "/mnt/oem596796984"
Feb 12 21:55:15.425509 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 21:55:15.425509 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 21:55:15.425509 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:55:15.437391 systemd[1]: mnt-oem596796984.mount: Deactivated successfully.
Feb 12 21:55:15.448137 ignition[1262]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem323498714"
Feb 12 21:55:15.448137 ignition[1262]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem323498714": device or resource busy
Feb 12 21:55:15.448137 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem323498714", trying btrfs: device or resource busy
Feb 12 21:55:15.448137 ignition[1262]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem323498714"
Feb 12 21:55:15.455925 ignition[1262]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem323498714"
Feb 12 21:55:15.455925 ignition[1262]: INFO : op(9): [started] unmounting "/mnt/oem323498714"
Feb 12 21:55:15.455925 ignition[1262]: INFO : op(9): [finished] unmounting "/mnt/oem323498714"
Feb 12 21:55:15.455925 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 21:55:15.458684 systemd[1]: mnt-oem323498714.mount: Deactivated successfully.
Feb 12 21:55:15.470019 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 21:55:15.473472 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:55:15.492110 ignition[1262]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2872036123"
Feb 12 21:55:15.500003 ignition[1262]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2872036123": device or resource busy
Feb 12 21:55:15.500003 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2872036123", trying btrfs: device or resource busy
Feb 12 21:55:15.500003 ignition[1262]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2872036123"
Feb 12 21:55:15.500003 ignition[1262]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2872036123"
Feb 12 21:55:15.511485 ignition[1262]: INFO : op(c): [started] unmounting "/mnt/oem2872036123"
Feb 12 21:55:15.511485 ignition[1262]: INFO : op(c): [finished] unmounting "/mnt/oem2872036123"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(10): [started] processing unit "amazon-ssm-agent.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(10): op(11): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(10): op(11): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(10): [finished] processing unit "amazon-ssm-agent.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(12): [started] processing unit "nvidia.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(12): [finished] processing unit "nvidia.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(13): [started] processing unit "containerd.service"
Feb 12 21:55:15.511485 ignition[1262]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(13): [finished] processing unit "containerd.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(15): [started] processing unit "prepare-cni-plugins.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(15): op(16): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(15): op(16): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(15): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(17): [started] processing unit "prepare-critools.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(17): op(18): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(17): op(18): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(17): [finished] processing unit "prepare-critools.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 21:55:15.538282 ignition[1262]: INFO : files: op(1d): [started] setting preset to enabled for "amazon-ssm-agent.service"
Feb 12 21:55:15.596959 ignition[1262]: INFO : files: op(1d): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Feb 12 21:55:15.606595 ignition[1262]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:55:15.613456 ignition[1262]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:55:15.613456 ignition[1262]: INFO : files: files passed
Feb 12 21:55:15.613456 ignition[1262]: INFO : Ignition finished successfully
Feb 12 21:55:15.623035 systemd[1]: Finished ignition-files.service.
Feb 12 21:55:15.637367 kernel: audit: type=1130 audit(1707774915.624:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.629533 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 21:55:15.642224 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 21:55:15.644842 systemd[1]: Starting ignition-quench.service...
Feb 12 21:55:15.650864 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 21:55:15.652312 systemd[1]: Finished ignition-quench.service.
Feb 12 21:55:15.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.664705 initrd-setup-root-after-ignition[1287]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 21:55:15.672670 kernel: audit: type=1130 audit(1707774915.654:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.672816 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 21:55:15.675304 systemd[1]: Reached target ignition-complete.target.
Feb 12 21:55:15.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.678300 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 21:55:15.696862 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 21:55:15.696959 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 21:55:15.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.700103 systemd[1]: Reached target initrd-fs.target.
Feb 12 21:55:15.702353 systemd[1]: Reached target initrd.target.
Feb 12 21:55:15.704248 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 21:55:15.706905 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 21:55:15.719220 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 21:55:15.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.722062 systemd[1]: Starting initrd-cleanup.service...
Feb 12 21:55:15.732027 systemd[1]: Stopped target nss-lookup.target.
Feb 12 21:55:15.734165 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 21:55:15.736423 systemd[1]: Stopped target timers.target.
Feb 12 21:55:15.738315 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 21:55:15.739574 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 21:55:15.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.741683 systemd[1]: Stopped target initrd.target.
Feb 12 21:55:15.743748 systemd[1]: Stopped target basic.target.
Feb 12 21:55:15.745611 systemd[1]: Stopped target ignition-complete.target.
Feb 12 21:55:15.747889 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 21:55:15.750132 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 21:55:15.752277 systemd[1]: Stopped target remote-fs.target.
Feb 12 21:55:15.754592 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 21:55:15.756806 systemd[1]: Stopped target sysinit.target.
Feb 12 21:55:15.758768 systemd[1]: Stopped target local-fs.target.
Feb 12 21:55:15.760628 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 21:55:15.762592 systemd[1]: Stopped target swap.target.
Feb 12 21:55:15.764068 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 21:55:15.764207 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 21:55:15.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.769849 systemd[1]: Stopped target cryptsetup.target.
Feb 12 21:55:15.772929 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 21:55:15.776721 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 21:55:15.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.778675 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 21:55:15.780099 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 21:55:15.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.783014 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 21:55:15.784514 systemd[1]: Stopped ignition-files.service.
Feb 12 21:55:15.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.787738 systemd[1]: Stopping ignition-mount.service...
Feb 12 21:55:15.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.790200 systemd[1]: Stopping iscsiuio.service...
Feb 12 21:55:15.792290 systemd[1]: Stopping sysroot-boot.service...
Feb 12 21:55:15.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.793360 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 21:55:15.807814 ignition[1300]: INFO : Ignition 2.14.0
Feb 12 21:55:15.807814 ignition[1300]: INFO : Stage: umount
Feb 12 21:55:15.807814 ignition[1300]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:55:15.807814 ignition[1300]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:55:15.793603 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 21:55:15.795198 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 21:55:15.795402 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 21:55:15.800823 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 21:55:15.821904 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:55:15.821904 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:55:15.800952 systemd[1]: Stopped iscsiuio.service.
Feb 12 21:55:15.826448 ignition[1300]: INFO : PUT result: OK
Feb 12 21:55:15.803131 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 21:55:15.804129 systemd[1]: Finished initrd-cleanup.service.
Feb 12 21:55:15.829874 ignition[1300]: INFO : umount: umount passed
Feb 12 21:55:15.830737 ignition[1300]: INFO : Ignition finished successfully
Feb 12 21:55:15.831752 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 21:55:15.831838 systemd[1]: Stopped ignition-mount.service.
Feb 12 21:55:15.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.835320 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 21:55:15.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.835375 systemd[1]: Stopped ignition-disks.service.
Feb 12 21:55:15.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.836300 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 21:55:15.836343 systemd[1]: Stopped ignition-kargs.service.
Feb 12 21:55:15.837296 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 21:55:15.837352 systemd[1]: Stopped ignition-fetch.service.
Feb 12 21:55:15.849122 systemd[1]: Stopped target network.target.
Feb 12 21:55:15.854484 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 21:55:15.855610 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 21:55:15.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.857595 systemd[1]: Stopped target paths.target.
Feb 12 21:55:15.859659 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 21:55:15.863488 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 21:55:15.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.864494 systemd[1]: Stopped target slices.target.
Feb 12 21:55:15.865286 systemd[1]: Stopped target sockets.target.
Feb 12 21:55:15.866267 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 21:55:15.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.866296 systemd[1]: Closed iscsid.socket.
Feb 12 21:55:15.867076 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 21:55:15.867119 systemd[1]: Closed iscsiuio.socket.
Feb 12 21:55:15.868046 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 21:55:15.868089 systemd[1]: Stopped ignition-setup.service.
Feb 12 21:55:15.869662 systemd[1]: Stopping systemd-networkd.service...
Feb 12 21:55:15.870610 systemd[1]: Stopping systemd-resolved.service...
Feb 12 21:55:15.874940 systemd-networkd[1106]: eth0: DHCPv6 lease lost
Feb 12 21:55:15.875255 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 21:55:15.894000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 21:55:15.875865 systemd[1]: Stopped sysroot-boot.service.
Feb 12 21:55:15.879350 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 21:55:15.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.879480 systemd[1]: Stopped systemd-networkd.service.
Feb 12 21:55:15.882465 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 21:55:15.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.882509 systemd[1]: Closed systemd-networkd.socket.
Feb 12 21:55:15.883808 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 21:55:15.883865 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 21:55:15.894135 systemd[1]: Stopping network-cleanup.service...
Feb 12 21:55:15.896672 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 21:55:15.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.897629 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 21:55:15.899810 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 21:55:15.899871 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 21:55:15.911628 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 21:55:15.913304 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 21:55:15.920639 systemd[1]: Stopping systemd-udevd.service...
Feb 12 21:55:15.937522 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 21:55:15.938864 systemd[1]: Stopped systemd-resolved.service.
Feb 12 21:55:15.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.941275 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 21:55:15.942405 systemd[1]: Stopped systemd-udevd.service.
Feb 12 21:55:15.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.945000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 21:55:15.945608 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 21:55:15.945679 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 21:55:15.947566 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 21:55:15.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.947622 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 21:55:15.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.952305 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 21:55:15.952357 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 21:55:15.953507 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 21:55:15.953544 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 21:55:15.954584 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 21:55:15.954620 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 21:55:15.957272 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 21:55:15.973633 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 12 21:55:15.973735 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 12 21:55:15.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.978716 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 21:55:15.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:15.978773 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 21:55:15.980729 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 21:55:15.980829 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 21:55:15.983309 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 21:55:15.983405 systemd[1]: Stopped network-cleanup.service.
Feb 12 21:55:15.983738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 21:55:15.983817 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 21:55:15.984049 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 21:55:15.984970 systemd[1]: Starting initrd-switch-root.service...
Feb 12 21:55:16.005535 systemd[1]: Switching root.
Feb 12 21:55:16.008000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 21:55:16.008000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 21:55:16.013000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 21:55:16.013000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 21:55:16.013000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 21:55:16.037200 iscsid[1111]: iscsid shutting down.
Feb 12 21:55:16.038342 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Feb 12 21:55:16.038758 systemd-journald[185]: Journal stopped
Feb 12 21:55:22.051889 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 21:55:22.052024 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 21:55:22.052052 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 21:55:22.052070 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 21:55:22.052097 kernel: SELinux: policy capability open_perms=1
Feb 12 21:55:22.052118 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 21:55:22.052136 kernel: SELinux: policy capability always_check_network=0
Feb 12 21:55:22.052155 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 21:55:22.052172 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 21:55:22.052189 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 21:55:22.052209 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 21:55:22.052404 systemd[1]: Successfully loaded SELinux policy in 121.432ms.
Feb 12 21:55:22.052457 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.369ms.
Feb 12 21:55:22.052479 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 21:55:22.052498 systemd[1]: Detected virtualization amazon.
Feb 12 21:55:22.052517 systemd[1]: Detected architecture x86-64.
Feb 12 21:55:22.052536 systemd[1]: Detected first boot.
Feb 12 21:55:22.052555 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 21:55:22.052574 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 21:55:22.052595 kernel: kauditd_printk_skb: 47 callbacks suppressed
Feb 12 21:55:22.052614 kernel: audit: type=1400 audit(1707774917.435:86): avc: denied { associate } for pid=1351 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 21:55:22.052639 kernel: audit: type=1300 audit(1707774917.435:86): arch=c000003e syscall=188 success=yes exit=0 a0=c0001196bc a1=c00002cb40 a2=c00002b440 a3=32 items=0 ppid=1334 pid=1351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:55:22.052698 kernel: audit: type=1327 audit(1707774917.435:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 21:55:22.052719 kernel: audit: type=1400 audit(1707774917.438:87): avc: denied { associate } for pid=1351 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 21:55:22.052741 kernel: audit: type=1300 audit(1707774917.438:87): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000119795 a2=1ed a3=0 items=2 ppid=1334 pid=1351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:55:22.052759 kernel: audit: type=1307 audit(1707774917.438:87): cwd="/"
Feb 12 21:55:22.052779 kernel: audit: type=1302 audit(1707774917.438:87): item=0 name=(null) inode=2 dev=00:28 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:22.052805 kernel: audit: type=1302 audit(1707774917.438:87): item=1 name=(null) inode=3 dev=00:28 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:22.052824 kernel: audit: type=1327 audit(1707774917.438:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 21:55:22.052842 systemd[1]: Populated /etc with preset unit settings.
Feb 12 21:55:22.052860 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:55:22.052882 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:55:22.052903 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:55:22.065388 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 21:55:22.065466 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 21:55:22.065486 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 21:55:22.065511 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 21:55:22.065536 systemd[1]: Created slice system-getty.slice.
Feb 12 21:55:22.065555 systemd[1]: Created slice system-modprobe.slice.
Feb 12 21:55:22.065574 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 21:55:22.075422 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 21:55:22.075474 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 21:55:22.075494 systemd[1]: Created slice user.slice.
Feb 12 21:55:22.075514 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 21:55:22.075533 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 21:55:22.075556 systemd[1]: Set up automount boot.automount.
Feb 12 21:55:22.075573 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 21:55:22.075593 systemd[1]: Reached target integritysetup.target.
Feb 12 21:55:22.075611 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 21:55:22.075631 systemd[1]: Reached target remote-fs.target.
Feb 12 21:55:22.075650 systemd[1]: Reached target slices.target.
Feb 12 21:55:22.076640 systemd[1]: Reached target swap.target.
Feb 12 21:55:22.076662 systemd[1]: Reached target torcx.target.
Feb 12 21:55:22.076682 systemd[1]: Reached target veritysetup.target.
Feb 12 21:55:22.076706 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 21:55:22.076725 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 21:55:22.076745 kernel: audit: type=1400 audit(1707774921.791:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 21:55:22.076766 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 21:55:22.076785 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 21:55:22.076803 systemd[1]: Listening on systemd-journald.socket.
Feb 12 21:55:22.076822 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 21:55:22.076841 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 21:55:22.076860 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 21:55:22.076880 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 21:55:22.076901 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 21:55:22.077033 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 21:55:22.077058 systemd[1]: Mounting media.mount...
Feb 12 21:55:22.077078 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 21:55:22.077100 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 21:55:22.077118 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 21:55:22.077137 systemd[1]: Mounting tmp.mount...
Feb 12 21:55:22.077157 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 21:55:22.077176 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 21:55:22.077195 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 21:55:22.077214 systemd[1]: Starting modprobe@configfs.service...
Feb 12 21:55:22.077233 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 21:55:22.077252 systemd[1]: Starting modprobe@drm.service...
Feb 12 21:55:22.077274 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 21:55:22.077293 systemd[1]: Starting modprobe@fuse.service...
Feb 12 21:55:22.077319 systemd[1]: Starting modprobe@loop.service...
Feb 12 21:55:22.077345 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 21:55:22.077365 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 12 21:55:22.077386 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 12 21:55:22.077404 systemd[1]: Starting systemd-journald.service...
Feb 12 21:55:22.077423 systemd[1]: Starting systemd-modules-load.service...
Feb 12 21:55:22.077454 systemd[1]: Starting systemd-network-generator.service...
Feb 12 21:55:22.077476 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 21:55:22.077496 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 21:55:22.077515 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 21:55:22.077534 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 21:55:22.077553 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 21:55:22.077573 systemd[1]: Mounted media.mount.
Feb 12 21:55:22.077592 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 21:55:22.077610 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 21:55:22.077630 systemd[1]: Mounted tmp.mount.
Feb 12 21:55:22.077652 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 21:55:22.077671 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 21:55:22.077691 systemd[1]: Finished modprobe@configfs.service.
Feb 12 21:55:22.077710 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 21:55:22.077729 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 21:55:22.077747 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 21:55:22.077767 systemd[1]: Finished modprobe@drm.service.
Feb 12 21:55:22.077786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 21:55:22.077806 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 21:55:22.077828 systemd[1]: Finished systemd-modules-load.service.
Feb 12 21:55:22.077847 systemd[1]: Finished systemd-network-generator.service.
Feb 12 21:55:22.077865 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 21:55:22.077887 systemd[1]: Reached target network-pre.target.
Feb 12 21:55:22.077906 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 21:55:22.077925 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 21:55:22.077950 systemd-journald[1448]: Journal started
Feb 12 21:55:22.078030 systemd-journald[1448]: Runtime Journal (/run/log/journal/ec21fa27ccfb26117fee64d8fab30023) is 4.8M, max 38.7M, 33.9M free.
Feb 12 21:55:21.791000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 21:55:21.791000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 12 21:55:22.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.045000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 21:55:22.045000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffef02a66d0 a2=4000 a3=7ffef02a676c items=0 ppid=1 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:55:22.045000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 21:55:22.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.088175 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 21:55:22.093453 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 21:55:22.098988 systemd[1]: Starting systemd-random-seed.service...
Feb 12 21:55:22.104317 systemd[1]: Starting systemd-sysctl.service...
Feb 12 21:55:22.108996 systemd[1]: Started systemd-journald.service.
Feb 12 21:55:22.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.112909 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 21:55:22.116568 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 21:55:22.135456 kernel: loop: module loaded
Feb 12 21:55:22.136278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 21:55:22.136589 systemd[1]: Finished modprobe@loop.service.
Feb 12 21:55:22.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.137983 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 21:55:22.140370 systemd[1]: Finished systemd-random-seed.service.
Feb 12 21:55:22.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.142130 systemd[1]: Reached target first-boot-complete.target.
Feb 12 21:55:22.150083 systemd[1]: Finished systemd-sysctl.service.
Feb 12 21:55:22.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.160453 kernel: fuse: init (API version 7.34)
Feb 12 21:55:22.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.164334 systemd-journald[1448]: Time spent on flushing to /var/log/journal/ec21fa27ccfb26117fee64d8fab30023 is 82.657ms for 1139 entries.
Feb 12 21:55:22.164334 systemd-journald[1448]: System Journal (/var/log/journal/ec21fa27ccfb26117fee64d8fab30023) is 8.0M, max 195.6M, 187.6M free.
Feb 12 21:55:22.268330 systemd-journald[1448]: Received client request to flush runtime journal.
Feb 12 21:55:22.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.161208 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 21:55:22.161616 systemd[1]: Finished modprobe@fuse.service.
Feb 12 21:55:22.167817 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 21:55:22.176534 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 21:55:22.220131 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 21:55:22.223893 systemd[1]: Starting systemd-sysusers.service...
Feb 12 21:55:22.259173 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 21:55:22.262327 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 21:55:22.269967 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 21:55:22.280580 udevadm[1499]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 12 21:55:22.320307 systemd[1]: Finished systemd-sysusers.service.
Feb 12 21:55:22.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:22.323611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 21:55:22.384595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 21:55:22.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.106664 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 21:55:23.120008 kernel: kauditd_printk_skb: 28 callbacks suppressed
Feb 12 21:55:23.120216 kernel: audit: type=1130 audit(1707774923.107:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.110220 systemd[1]: Starting systemd-udevd.service...
Feb 12 21:55:23.148504 systemd-udevd[1507]: Using default interface naming scheme 'v252'.
Feb 12 21:55:23.226265 systemd[1]: Started systemd-udevd.service.
Feb 12 21:55:23.238938 kernel: audit: type=1130 audit(1707774923.227:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.236760 systemd[1]: Starting systemd-networkd.service...
Feb 12 21:55:23.264747 systemd[1]: Starting systemd-userdbd.service...
Feb 12 21:55:23.333892 systemd[1]: Found device dev-ttyS0.device.
Feb 12 21:55:23.379712 (udev-worker)[1515]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:55:23.392285 systemd[1]: Started systemd-userdbd.service.
Feb 12 21:55:23.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.399613 kernel: audit: type=1130 audit(1707774923.393:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.449465 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 12 21:55:23.456948 kernel: ACPI: button: Power Button [PWRF]
Feb 12 21:55:23.457038 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Feb 12 21:55:23.469757 kernel: ACPI: button: Sleep Button [SLPF]
Feb 12 21:55:23.445000 audit[1508]: AVC avc: denied { confidentiality } for pid=1508 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 21:55:23.484578 kernel: audit: type=1400 audit(1707774923.445:118): avc: denied { confidentiality } for pid=1508 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 21:55:23.445000 audit[1508]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55daa09f1c40 a1=32194 a2=7f67b2537bc5 a3=5 items=108 ppid=1507 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:55:23.499497 kernel: audit: type=1300 audit(1707774923.445:118): arch=c000003e syscall=175 success=yes exit=0 a0=55daa09f1c40 a1=32194 a2=7f67b2537bc5 a3=5 items=108 ppid=1507 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:55:23.508288 kernel: audit: type=1307 audit(1707774923.445:118): cwd="/"
Feb 12 21:55:23.508375 kernel: audit: type=1302 audit(1707774923.445:118): item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: CWD cwd="/"
Feb 12 21:55:23.445000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=1 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.515468 kernel: audit: type=1302 audit(1707774923.445:118): item=1 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=2 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.522489 kernel: audit: type=1302 audit(1707774923.445:118): item=2 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=3 name=(null) inode=14250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.529449 kernel: audit: type=1302 audit(1707774923.445:118): item=3 name=(null) inode=14250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=4 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=5 name=(null) inode=14251 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=6 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=7 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=8 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=9 name=(null) inode=14253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=10 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=11 name=(null) inode=14254 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=12 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=13 name=(null) inode=14255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=14 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=15 name=(null) inode=14256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=16 name=(null) inode=14252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=17 name=(null) inode=14257 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=18 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=19 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=20 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=21 name=(null) inode=14259 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=22 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=23 name=(null) inode=14260 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=24 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=25 name=(null) inode=14261 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=26 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=27 name=(null) inode=14262 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=28 name=(null) inode=14258 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=29 name=(null) inode=14263 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=30 name=(null) inode=14249 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=31 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=32 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=33 name=(null) inode=14265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=34 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=35 name=(null) inode=14266 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=36 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=37 name=(null) inode=14267 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=38 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=39 name=(null) inode=14268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=40 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=41 name=(null) inode=14269 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:55:23.445000 audit: PATH item=42 name=(null) inode=14249 dev=00:0b
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=43 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=44 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=45 name=(null) inode=14271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=46 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=47 name=(null) inode=14272 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=48 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=49 name=(null) inode=14273 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=50 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=51 name=(null) inode=14274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=52 name=(null) inode=14270 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=53 name=(null) inode=14275 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=55 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=56 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=57 name=(null) inode=14277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=58 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=59 name=(null) inode=14278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=60 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=61 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=62 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=63 name=(null) inode=14280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=64 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=65 name=(null) inode=14281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=66 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=67 name=(null) inode=14282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=68 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=69 name=(null) inode=14283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=70 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=71 name=(null) inode=14284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=72 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=73 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=74 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=75 name=(null) inode=14286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=76 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=77 name=(null) inode=14287 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=78 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
21:55:23.445000 audit: PATH item=79 name=(null) inode=14288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=80 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=81 name=(null) inode=14289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=82 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=83 name=(null) inode=14290 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=84 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=85 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=86 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=87 name=(null) inode=14292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=88 
name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=89 name=(null) inode=14293 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=90 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=91 name=(null) inode=14294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=92 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=93 name=(null) inode=14295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=94 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=95 name=(null) inode=14296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=96 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=97 name=(null) inode=14297 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=98 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=99 name=(null) inode=14298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=100 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=101 name=(null) inode=14299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=102 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=103 name=(null) inode=14300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=104 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=105 name=(null) inode=14301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=106 name=(null) inode=14297 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PATH item=107 name=(null) inode=14302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:55:23.445000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 21:55:23.557454 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 12 21:55:23.583473 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Feb 12 21:55:23.599646 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 21:55:23.617454 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1519) Feb 12 21:55:23.634363 systemd-networkd[1513]: lo: Link UP Feb 12 21:55:23.634378 systemd-networkd[1513]: lo: Gained carrier Feb 12 21:55:23.634963 systemd-networkd[1513]: Enumeration completed Feb 12 21:55:23.635134 systemd[1]: Started systemd-networkd.service. Feb 12 21:55:23.635261 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 21:55:23.638237 systemd-networkd[1513]: eth0: Link UP Feb 12 21:55:23.638410 systemd-networkd[1513]: eth0: Gained carrier Feb 12 21:55:23.638499 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 21:55:23.645661 systemd-networkd[1513]: eth0: DHCPv4 address 172.31.23.213/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 12 21:55:23.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:55:23.853090 systemd[1]: Finished systemd-udev-settle.service. 
Feb 12 21:55:23.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.886561 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 12 21:55:23.888031 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 21:55:23.891308 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 21:55:23.925687 lvm[1621]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 21:55:23.964031 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 21:55:23.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:23.965708 systemd[1]: Reached target cryptsetup.target.
Feb 12 21:55:23.977068 systemd[1]: Starting lvm2-activation.service...
Feb 12 21:55:23.991595 lvm[1624]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 21:55:24.023258 systemd[1]: Finished lvm2-activation.service.
Feb 12 21:55:24.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.025460 systemd[1]: Reached target local-fs-pre.target.
Feb 12 21:55:24.029010 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 21:55:24.029050 systemd[1]: Reached target local-fs.target.
Feb 12 21:55:24.031714 systemd[1]: Reached target machines.target.
Feb 12 21:55:24.035776 systemd[1]: Starting ldconfig.service...
Feb 12 21:55:24.037448 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 21:55:24.037514 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:55:24.038911 systemd[1]: Starting systemd-boot-update.service...
Feb 12 21:55:24.041410 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 21:55:24.044918 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 21:55:24.047488 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 21:55:24.047582 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 21:55:24.049657 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 21:55:24.062131 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1627 (bootctl)
Feb 12 21:55:24.064350 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 21:55:24.084038 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 21:55:24.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.094903 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 21:55:24.098109 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 21:55:24.100943 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 21:55:24.196714 systemd-fsck[1636]: fsck.fat 4.2 (2021-01-31)
Feb 12 21:55:24.196714 systemd-fsck[1636]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters
Feb 12 21:55:24.203307 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 21:55:24.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.208006 systemd[1]: Mounting boot.mount...
Feb 12 21:55:24.245564 systemd[1]: Mounted boot.mount.
Feb 12 21:55:24.282824 systemd[1]: Finished systemd-boot-update.service.
Feb 12 21:55:24.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.423903 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 21:55:24.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.427377 systemd[1]: Starting audit-rules.service...
Feb 12 21:55:24.431931 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 21:55:24.435689 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 21:55:24.440135 systemd[1]: Starting systemd-resolved.service...
Feb 12 21:55:24.444625 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 21:55:24.453038 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 21:55:24.455725 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 21:55:24.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.463154 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 21:55:24.500000 audit[1661]: SYSTEM_BOOT pid=1661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.503075 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 21:55:24.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:55:24.545988 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 21:55:24.591000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 21:55:24.591000 audit[1678]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc229c3f80 a2=420 a3=0 items=0 ppid=1655 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:55:24.591000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 21:55:24.592204 augenrules[1678]: No rules
Feb 12 21:55:24.594063 systemd[1]: Finished audit-rules.service.
Feb 12 21:55:24.655474 systemd-resolved[1659]: Positive Trust Anchors:
Feb 12 21:55:24.655498 systemd-resolved[1659]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 21:55:24.655539 systemd-resolved[1659]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 21:55:24.661494 systemd[1]: Started systemd-timesyncd.service.
Feb 12 21:55:24.662678 systemd[1]: Reached target time-set.target.
Feb 12 21:55:24.697350 systemd-resolved[1659]: Defaulting to hostname 'linux'.
Feb 12 21:55:24.699224 systemd[1]: Started systemd-resolved.service.
Feb 12 21:55:24.700451 systemd[1]: Reached target network.target.
Feb 12 21:55:24.701690 systemd[1]: Reached target nss-lookup.target.
Feb 12 21:55:24.748582 systemd-timesyncd[1660]: Contacted time server 104.167.241.253:123 (0.flatcar.pool.ntp.org).
Feb 12 21:55:24.748663 systemd-timesyncd[1660]: Initial clock synchronization to Mon 2024-02-12 21:55:24.898330 UTC.
Feb 12 21:55:25.035497 ldconfig[1626]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 21:55:25.041783 systemd[1]: Finished ldconfig.service.
Feb 12 21:55:25.045236 systemd[1]: Starting systemd-update-done.service...
Feb 12 21:55:25.065748 systemd[1]: Finished systemd-update-done.service.
Feb 12 21:55:25.071199 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 21:55:25.072403 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 21:55:25.074545 systemd[1]: Reached target sysinit.target.
Feb 12 21:55:25.075925 systemd[1]: Started motdgen.path.
Feb 12 21:55:25.077012 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 21:55:25.078735 systemd[1]: Started logrotate.timer.
Feb 12 21:55:25.079823 systemd[1]: Started mdadm.timer.
Feb 12 21:55:25.080691 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 21:55:25.081774 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 21:55:25.081802 systemd[1]: Reached target paths.target.
Feb 12 21:55:25.082980 systemd[1]: Reached target timers.target.
Feb 12 21:55:25.084438 systemd[1]: Listening on dbus.socket.
Feb 12 21:55:25.087217 systemd[1]: Starting docker.socket...
Feb 12 21:55:25.089856 systemd[1]: Listening on sshd.socket.
Feb 12 21:55:25.091203 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:55:25.091752 systemd[1]: Listening on docker.socket.
Feb 12 21:55:25.092787 systemd[1]: Reached target sockets.target.
Feb 12 21:55:25.093866 systemd[1]: Reached target basic.target.
Feb 12 21:55:25.095239 systemd[1]: System is tainted: cgroupsv1
Feb 12 21:55:25.095423 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 21:55:25.095683 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 21:55:25.097430 systemd[1]: Starting containerd.service...
Feb 12 21:55:25.099837 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 12 21:55:25.102857 systemd[1]: Starting dbus.service...
Feb 12 21:55:25.105436 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 21:55:25.111412 systemd[1]: Starting extend-filesystems.service...
Feb 12 21:55:25.114651 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 21:55:25.117146 systemd[1]: Starting motdgen.service...
Feb 12 21:55:25.122493 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 21:55:25.129809 systemd[1]: Starting prepare-critools.service...
Feb 12 21:55:25.186853 jq[1695]: false
Feb 12 21:55:25.137162 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 21:55:25.151371 systemd[1]: Starting sshd-keygen.service...
Feb 12 21:55:25.160907 systemd[1]: Starting systemd-logind.service...
Feb 12 21:55:25.162820 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:55:25.162958 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 21:55:25.170834 systemd[1]: Starting update-engine.service...
Feb 12 21:55:25.176994 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 21:55:25.181659 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 21:55:25.182132 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 21:55:25.215140 jq[1710]: true Feb 12 21:55:25.220424 tar[1712]: ./ Feb 12 21:55:25.220424 tar[1712]: ./macvlan Feb 12 21:55:25.245189 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 21:55:25.245545 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 21:55:25.257815 tar[1713]: crictl Feb 12 21:55:25.258596 jq[1721]: true Feb 12 21:55:25.297981 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 21:55:25.298392 systemd[1]: Finished motdgen.service. Feb 12 21:55:25.318680 extend-filesystems[1696]: Found nvme0n1 Feb 12 21:55:25.331714 extend-filesystems[1696]: Found nvme0n1p1 Feb 12 21:55:25.334367 dbus-daemon[1694]: [system] SELinux support is enabled Feb 12 21:55:25.334731 systemd[1]: Started dbus.service. Feb 12 21:55:25.338994 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 21:55:25.339026 systemd[1]: Reached target system-config.target. Feb 12 21:55:25.340592 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 21:55:25.340623 systemd[1]: Reached target user-config.target. 
Feb 12 21:55:25.342755 dbus-daemon[1694]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1513 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 12 21:55:25.345180 extend-filesystems[1696]: Found nvme0n1p2 Feb 12 21:55:25.346514 extend-filesystems[1696]: Found nvme0n1p3 Feb 12 21:55:25.346514 extend-filesystems[1696]: Found usr Feb 12 21:55:25.346514 extend-filesystems[1696]: Found nvme0n1p4 Feb 12 21:55:25.350020 extend-filesystems[1696]: Found nvme0n1p6 Feb 12 21:55:25.350020 extend-filesystems[1696]: Found nvme0n1p7 Feb 12 21:55:25.350020 extend-filesystems[1696]: Found nvme0n1p9 Feb 12 21:55:25.350020 extend-filesystems[1696]: Checking size of /dev/nvme0n1p9 Feb 12 21:55:25.370392 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 21:55:25.377054 systemd[1]: Starting systemd-hostnamed.service... Feb 12 21:55:25.391333 extend-filesystems[1696]: Resized partition /dev/nvme0n1p9 Feb 12 21:55:25.443809 extend-filesystems[1758]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 21:55:25.449590 bash[1760]: Updated "/home/core/.ssh/authorized_keys" Feb 12 21:55:25.451408 update_engine[1707]: I0212 21:55:25.449494 1707 main.cc:92] Flatcar Update Engine starting Feb 12 21:55:25.455826 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 21:55:25.456973 update_engine[1707]: I0212 21:55:25.456660 1707 update_check_scheduler.cc:74] Next update check in 10m50s Feb 12 21:55:25.457389 systemd[1]: Started update-engine.service. Feb 12 21:55:25.462148 systemd[1]: Started locksmithd.service. 
Feb 12 21:55:25.503477 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 12 21:55:25.522215 env[1715]: time="2024-02-12T21:55:25.520748300Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 21:55:25.580166 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 12 21:55:25.619002 extend-filesystems[1758]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 12 21:55:25.619002 extend-filesystems[1758]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 21:55:25.619002 extend-filesystems[1758]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 12 21:55:25.627975 extend-filesystems[1696]: Resized filesystem in /dev/nvme0n1p9 Feb 12 21:55:25.621667 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 21:55:25.621995 systemd[1]: Finished extend-filesystems.service. Feb 12 21:55:25.649081 tar[1712]: ./static Feb 12 21:55:25.666660 systemd-networkd[1513]: eth0: Gained IPv6LL Feb 12 21:55:25.687903 systemd-logind[1706]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 21:55:25.687936 systemd-logind[1706]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 12 21:55:25.687961 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 21:55:25.690635 systemd-logind[1706]: New seat seat0. Feb 12 21:55:25.744514 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 21:55:25.748179 systemd[1]: Started systemd-logind.service. Feb 12 21:55:25.750825 systemd[1]: Created slice system-sshd.slice. Feb 12 21:55:25.752192 systemd[1]: Reached target network-online.target. Feb 12 21:55:25.755697 systemd[1]: Started amazon-ssm-agent.service. Feb 12 21:55:25.758871 systemd[1]: Started nvidia.service. 
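The kernel's EXT4 resize messages above record the root partition growing from 553472 to 1489915 blocks. As a hedged sketch (the helper name and regex are our own, not part of any tool in this log), those counts can be recovered programmatically from such a line:

```python
import re

# Illustrative parser for the kernel's "EXT4-fs ... resizing filesystem"
# message format seen in this boot log; parse_ext4_resize is our own name.
RESIZE_RE = re.compile(
    r"EXT4-fs \((?P<dev>\S+)\): resizing filesystem from "
    r"(?P<old>\d+) to (?P<new>\d+) blocks"
)

def parse_ext4_resize(line: str):
    """Return (device, old_blocks, new_blocks), or None if no match."""
    m = RESIZE_RE.search(line)
    if not m:
        return None
    return m.group("dev"), int(m.group("old")), int(m.group("new"))

line = ("Feb 12 21:55:25.503477 kernel: EXT4-fs (nvme0n1p9): "
        "resizing filesystem from 553472 to 1489915 blocks")
dev, old_blocks, new_blocks = parse_ext4_resize(line)

# With the 4 KiB blocks reported by resize2fs, the resized filesystem
# comes out to roughly 5.7 GiB.
size_gib = new_blocks * 4096 / 2**30
```

This matches the resize2fs summary later in the log, which confirms the filesystem ended at 1489915 4k blocks.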
Feb 12 21:55:25.764534 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 12 21:55:25.764904 systemd[1]: Started systemd-hostnamed.service. Feb 12 21:55:25.767224 dbus-daemon[1694]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1754 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 12 21:55:25.771269 systemd[1]: Starting polkit.service... Feb 12 21:55:25.805121 polkitd[1786]: Started polkitd version 121 Feb 12 21:55:25.915616 polkitd[1786]: Loading rules from directory /etc/polkit-1/rules.d Feb 12 21:55:25.915965 polkitd[1786]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 12 21:55:25.936481 env[1715]: time="2024-02-12T21:55:25.936386144Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 21:55:25.942198 env[1715]: time="2024-02-12T21:55:25.942153164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:55:25.943684 polkitd[1786]: Finished loading, compiling and executing 2 rules Feb 12 21:55:25.944714 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 12 21:55:25.944992 systemd[1]: Started polkit.service. Feb 12 21:55:25.947526 polkitd[1786]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 12 21:55:25.953871 env[1715]: time="2024-02-12T21:55:25.953804832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:55:25.957867 tar[1712]: ./vlan Feb 12 21:55:25.958558 env[1715]: time="2024-02-12T21:55:25.958509592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:55:25.959156 env[1715]: time="2024-02-12T21:55:25.959116624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:55:25.962444 env[1715]: time="2024-02-12T21:55:25.962374586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 21:55:25.962463 systemd-hostnamed[1754]: Hostname set to (transient) Feb 12 21:55:25.963037 env[1715]: time="2024-02-12T21:55:25.962979167Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 21:55:25.963150 env[1715]: time="2024-02-12T21:55:25.963132649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 21:55:25.963389 env[1715]: time="2024-02-12T21:55:25.963355487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:55:25.963963 env[1715]: time="2024-02-12T21:55:25.963943196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 21:55:25.964385 env[1715]: time="2024-02-12T21:55:25.964360032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 21:55:25.964497 systemd-resolved[1659]: System hostname changed to 'ip-172-31-23-213'. Feb 12 21:55:25.967350 env[1715]: time="2024-02-12T21:55:25.967324524Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 21:55:25.967588 env[1715]: time="2024-02-12T21:55:25.967567195Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 21:55:25.968508 env[1715]: time="2024-02-12T21:55:25.968485271Z" level=info msg="metadata content store policy set" policy=shared Feb 12 21:55:25.995573 env[1715]: time="2024-02-12T21:55:25.995518142Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 21:55:25.995751 env[1715]: time="2024-02-12T21:55:25.995586749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 21:55:25.995751 env[1715]: time="2024-02-12T21:55:25.995612999Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 21:55:25.995751 env[1715]: time="2024-02-12T21:55:25.995652729Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 21:55:25.995751 env[1715]: time="2024-02-12T21:55:25.995675983Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 21:55:25.995751 env[1715]: time="2024-02-12T21:55:25.995694897Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 21:55:25.995751 env[1715]: time="2024-02-12T21:55:25.995713822Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 12 21:55:25.995751 env[1715]: time="2024-02-12T21:55:25.995732666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 21:55:25.995751 env[1715]: time="2024-02-12T21:55:25.995750112Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 21:55:25.996143 env[1715]: time="2024-02-12T21:55:25.995770299Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 21:55:25.996143 env[1715]: time="2024-02-12T21:55:25.995791186Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 21:55:25.996143 env[1715]: time="2024-02-12T21:55:25.995811600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 21:55:25.996143 env[1715]: time="2024-02-12T21:55:25.995975892Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 21:55:25.996303 env[1715]: time="2024-02-12T21:55:25.996178446Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 21:55:25.996848 env[1715]: time="2024-02-12T21:55:25.996668715Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 21:55:25.996927 env[1715]: time="2024-02-12T21:55:25.996859571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.996927 env[1715]: time="2024-02-12T21:55:25.996889419Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 21:55:25.997050 env[1715]: time="2024-02-12T21:55:25.996956468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 12 21:55:25.997050 env[1715]: time="2024-02-12T21:55:25.996978057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997050 env[1715]: time="2024-02-12T21:55:25.996999390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997234 env[1715]: time="2024-02-12T21:55:25.997050487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997234 env[1715]: time="2024-02-12T21:55:25.997070338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997234 env[1715]: time="2024-02-12T21:55:25.997090158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997234 env[1715]: time="2024-02-12T21:55:25.997108523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997234 env[1715]: time="2024-02-12T21:55:25.997126586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997234 env[1715]: time="2024-02-12T21:55:25.997151494Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 21:55:25.997495 env[1715]: time="2024-02-12T21:55:25.997357547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997495 env[1715]: time="2024-02-12T21:55:25.997383516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997495 env[1715]: time="2024-02-12T21:55:25.997403547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 12 21:55:25.997495 env[1715]: time="2024-02-12T21:55:25.997421764Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 21:55:25.997495 env[1715]: time="2024-02-12T21:55:25.997444505Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 21:55:25.997495 env[1715]: time="2024-02-12T21:55:25.997476138Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 21:55:25.997716 env[1715]: time="2024-02-12T21:55:25.997502761Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 21:55:25.997716 env[1715]: time="2024-02-12T21:55:25.997548564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 21:55:25.997942 env[1715]: time="2024-02-12T21:55:25.997842465Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 21:55:25.997942 env[1715]: time="2024-02-12T21:55:25.997922662Z" level=info msg="Connect containerd service" Feb 12 21:55:26.000372 env[1715]: time="2024-02-12T21:55:25.997972913Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 21:55:26.002168 env[1715]: time="2024-02-12T21:55:26.001488728Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 21:55:26.002168 env[1715]: time="2024-02-12T21:55:26.001744111Z" level=info msg="Start subscribing containerd event" Feb 12 21:55:26.002168 env[1715]: time="2024-02-12T21:55:26.001800111Z" level=info msg="Start recovering state" Feb 12 21:55:26.002168 env[1715]: 
time="2024-02-12T21:55:26.001874609Z" level=info msg="Start event monitor" Feb 12 21:55:26.002168 env[1715]: time="2024-02-12T21:55:26.001888768Z" level=info msg="Start snapshots syncer" Feb 12 21:55:26.002168 env[1715]: time="2024-02-12T21:55:26.001901490Z" level=info msg="Start cni network conf syncer for default" Feb 12 21:55:26.002168 env[1715]: time="2024-02-12T21:55:26.001912893Z" level=info msg="Start streaming server" Feb 12 21:55:26.002494 env[1715]: time="2024-02-12T21:55:26.002181138Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 21:55:26.002494 env[1715]: time="2024-02-12T21:55:26.002240877Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 21:55:26.002425 systemd[1]: Started containerd.service. Feb 12 21:55:26.047402 amazon-ssm-agent[1781]: 2024/02/12 21:55:26 Failed to load instance info from vault. RegistrationKey does not exist. Feb 12 21:55:26.050591 amazon-ssm-agent[1781]: Initializing new seelog logger Feb 12 21:55:26.055301 amazon-ssm-agent[1781]: New Seelog Logger Creation Complete Feb 12 21:55:26.057576 amazon-ssm-agent[1781]: 2024/02/12 21:55:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 21:55:26.057683 amazon-ssm-agent[1781]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 12 21:55:26.057977 amazon-ssm-agent[1781]: 2024/02/12 21:55:26 processing appconfig overrides Feb 12 21:55:26.100223 env[1715]: time="2024-02-12T21:55:26.100171923Z" level=info msg="containerd successfully booted in 0.612971s" Feb 12 21:55:26.209108 tar[1712]: ./portmap Feb 12 21:55:26.264995 systemd[1]: nvidia.service: Deactivated successfully. 
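The containerd entries above use a logfmt-style layout (`time="..." level=... msg="..."`). As a hedged sketch, a small regex with named groups is enough to pull those fields apart; the parser and its name are illustrative, not part of containerd:

```python
import re

# Illustrative parser for the time=/level=/msg= fields used by the
# containerd lines in this log; handles backslash-escaped quotes in msg.
ENTRY_RE = re.compile(
    r'time="(?P<time>[^"]+)"\s+level=(?P<level>\w+)\s+'
    r'msg="(?P<msg>(?:[^"\\]|\\.)*)"'
)

def parse_containerd_line(line: str):
    """Return a dict of time/level/msg, or None if the line doesn't match."""
    m = ENTRY_RE.search(line)
    return m.groupdict() if m else None

entry = parse_containerd_line(
    'env[1715]: time="2024-02-12T21:55:26.100171923Z" level=info '
    'msg="containerd successfully booted in 0.612971s"'
)
```

Filtering on `level` this way is a quick route to the `warning`/`error` entries (the skipped aufs/btrfs/zfs snapshotters, the missing CNI config) buried in the stream above.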
Feb 12 21:55:26.352194 coreos-metadata[1692]: Feb 12 21:55:26.352 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 12 21:55:26.361983 coreos-metadata[1692]: Feb 12 21:55:26.361 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 12 21:55:26.364123 coreos-metadata[1692]: Feb 12 21:55:26.364 INFO Fetch successful Feb 12 21:55:26.364376 coreos-metadata[1692]: Feb 12 21:55:26.364 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 12 21:55:26.365065 coreos-metadata[1692]: Feb 12 21:55:26.364 INFO Fetch successful Feb 12 21:55:26.367583 unknown[1692]: wrote ssh authorized keys file for user: core Feb 12 21:55:26.375624 tar[1712]: ./host-local Feb 12 21:55:26.393814 update-ssh-keys[1885]: Updated "/home/core/.ssh/authorized_keys" Feb 12 21:55:26.394561 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 21:55:26.527323 tar[1712]: ./vrf Feb 12 21:55:26.546051 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Create new startup processor Feb 12 21:55:26.548179 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [LongRunningPluginsManager] registered plugins: {} Feb 12 21:55:26.548344 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Initializing bookkeeping folders Feb 12 21:55:26.548450 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO removing the completed state files Feb 12 21:55:26.548551 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Initializing bookkeeping folders for long running plugins Feb 12 21:55:26.548630 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 12 21:55:26.548706 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Initializing healthcheck folders for long running plugins Feb 12 21:55:26.548778 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Initializing locations for inventory plugin Feb 12 21:55:26.548865 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO 
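The coreos-metadata entries above show the fetch sequence against the EC2 instance metadata service: a PUT to the IMDSv2 token endpoint, then GETs for the public-keys index and the first key. A hedged sketch of just the URL construction (no network calls; the helper name and dict layout are our own):

```python
# Illustrative reconstruction of the IMDS URLs that coreos-metadata
# logs above; imds_urls is our own helper, not part of coreos-metadata.
IMDS = "http://169.254.169.254"

def imds_urls(version: str = "2019-10-01") -> dict:
    """Return the endpoints seen in this boot's coreos-metadata fetch."""
    base = f"{IMDS}/{version}/meta-data"
    return {
        "token": f"{IMDS}/latest/api/token",         # PUT (IMDSv2 token)
        "keys_index": f"{base}/public-keys",         # GET key index
        "key": f"{base}/public-keys/0/openssh-key",  # GET first key
    }

urls = imds_urls()
```

Both GETs report "Fetch successful" in the log, after which the retrieved key is written to `/home/core/.ssh/authorized_keys`.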
Initializing default location for custom inventory Feb 12 21:55:26.548942 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Initializing default location for file inventory Feb 12 21:55:26.549017 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Initializing default location for role inventory Feb 12 21:55:26.549094 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Init the cloudwatchlogs publisher Feb 12 21:55:26.549248 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 12 21:55:26.549347 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:runDockerAction Feb 12 21:55:26.549426 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:refreshAssociation Feb 12 21:55:26.549512 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:configurePackage Feb 12 21:55:26.549590 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:softwareInventory Feb 12 21:55:26.549667 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:configureDocker Feb 12 21:55:26.549751 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:downloadContent Feb 12 21:55:26.549829 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:runDocument Feb 12 21:55:26.549905 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 12 21:55:26.549980 
amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Successfully loaded platform dependent plugin aws:runShellScript Feb 12 21:55:26.550058 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 12 21:55:26.550135 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO OS: linux, Arch: amd64 Feb 12 21:55:26.551473 amazon-ssm-agent[1781]: datastore file /var/lib/amazon/ssm/i-04cef99d14d904516/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 12 21:55:26.628528 tar[1712]: ./bridge Feb 12 21:55:26.645048 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] Starting document processing engine... Feb 12 21:55:26.739742 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 12 21:55:26.746274 tar[1712]: ./tuning Feb 12 21:55:26.834101 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 12 21:55:26.838366 tar[1712]: ./firewall Feb 12 21:55:26.921008 systemd[1]: Finished prepare-critools.service. 
Feb 12 21:55:26.928793 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] Starting message polling Feb 12 21:55:26.941006 tar[1712]: ./host-device Feb 12 21:55:27.001618 tar[1712]: ./sbr Feb 12 21:55:27.023365 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 12 21:55:27.050822 tar[1712]: ./loopback Feb 12 21:55:27.091254 tar[1712]: ./dhcp Feb 12 21:55:27.118294 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [instanceID=i-04cef99d14d904516] Starting association polling Feb 12 21:55:27.207695 tar[1712]: ./ptp Feb 12 21:55:27.213390 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 12 21:55:27.262170 tar[1712]: ./ipvlan Feb 12 21:55:27.308666 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 12 21:55:27.313399 tar[1712]: ./bandwidth Feb 12 21:55:27.386375 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 21:55:27.404457 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 12 21:55:27.464355 locksmithd[1767]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 21:55:27.499939 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 12 21:55:27.596449 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 12 21:55:27.659505 sshd_keygen[1729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 21:55:27.692606 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] Starting session document processing engine... Feb 12 21:55:27.692957 systemd[1]: Finished sshd-keygen.service. Feb 12 21:55:27.697945 systemd[1]: Starting issuegen.service... 
Feb 12 21:55:27.702137 systemd[1]: Started sshd@0-172.31.23.213:22-139.178.89.65:49460.service. Feb 12 21:55:27.712890 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 21:55:27.713276 systemd[1]: Finished issuegen.service. Feb 12 21:55:27.716505 systemd[1]: Starting systemd-user-sessions.service... Feb 12 21:55:27.733517 systemd[1]: Finished systemd-user-sessions.service. Feb 12 21:55:27.737624 systemd[1]: Started getty@tty1.service. Feb 12 21:55:27.742112 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 21:55:27.743525 systemd[1]: Reached target getty.target. Feb 12 21:55:27.744627 systemd[1]: Reached target multi-user.target. Feb 12 21:55:27.747853 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 21:55:27.759307 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 21:55:27.760003 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 21:55:27.764518 systemd[1]: Startup finished in 9.872s (kernel) + 10.924s (userspace) = 20.797s. Feb 12 21:55:27.789628 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 12 21:55:27.886147 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 12 21:55:27.899303 sshd[1921]: Accepted publickey for core from 139.178.89.65 port 49460 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:55:27.902574 sshd[1921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:27.920957 systemd-logind[1706]: New session 1 of user core. Feb 12 21:55:27.924933 systemd[1]: Created slice user-500.slice. Feb 12 21:55:27.927510 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 21:55:27.945697 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 21:55:27.948037 systemd[1]: Starting user@500.service... 
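The "Startup finished" line above breaks boot time into kernel and userspace phases. As a hedged sketch (regex and helper are our own, mirroring this log's phrasing), the components can be extracted and sanity-checked against the printed total, which agrees with their sum to within systemd's millisecond rounding:

```python
import re

# Illustrative parser for systemd's "Startup finished in Xs (kernel)
# + Ys (userspace) = Zs" line; parse_startup is our own name.
STARTUP_RE = re.compile(
    r"Startup finished in (?P<kernel>[\d.]+)s \(kernel\) "
    r"\+ (?P<user>[\d.]+)s \(userspace\) = (?P<total>[\d.]+)s"
)

def parse_startup(line: str):
    """Return kernel/user/total seconds as floats, or None if no match."""
    m = STARTUP_RE.search(line)
    if not m:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}

t = parse_startup("systemd[1]: Startup finished in 9.872s (kernel) "
                  "+ 10.924s (userspace) = 20.797s.")
```

Note the printed components sum to 20.796s while the total reads 20.797s; systemd rounds each figure independently from higher-precision internal values, so a 1 ms discrepancy is expected.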
Feb 12 21:55:27.957338 (systemd)[1935]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:27.982854 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-04cef99d14d904516, requestId: 6569eeaf-49e8-4488-a8d2-73a962aa8b15 Feb 12 21:55:28.049819 systemd[1935]: Queued start job for default target default.target. Feb 12 21:55:28.050140 systemd[1935]: Reached target paths.target. Feb 12 21:55:28.050163 systemd[1935]: Reached target sockets.target. Feb 12 21:55:28.050183 systemd[1935]: Reached target timers.target. Feb 12 21:55:28.050201 systemd[1935]: Reached target basic.target. Feb 12 21:55:28.050355 systemd[1]: Started user@500.service. Feb 12 21:55:28.051689 systemd[1]: Started session-1.scope. Feb 12 21:55:28.053023 systemd[1935]: Reached target default.target. Feb 12 21:55:28.053932 systemd[1935]: Startup finished in 87ms. Feb 12 21:55:28.079743 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [OfflineService] Starting document processing engine... Feb 12 21:55:28.176976 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [OfflineService] [EngineProcessor] Starting Feb 12 21:55:28.201031 systemd[1]: Started sshd@1-172.31.23.213:22-139.178.89.65:54676.service. Feb 12 21:55:28.275309 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [OfflineService] [EngineProcessor] Initial processing Feb 12 21:55:28.372885 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [OfflineService] Starting message polling Feb 12 21:55:28.387160 sshd[1944]: Accepted publickey for core from 139.178.89.65 port 54676 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:55:28.388617 sshd[1944]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:28.392898 systemd-logind[1706]: New session 2 of user core. Feb 12 21:55:28.394182 systemd[1]: Started session-2.scope. 
Feb 12 21:55:28.471066 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [OfflineService] Starting send replies to MDS Feb 12 21:55:28.521268 sshd[1944]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:28.524534 systemd[1]: sshd@1-172.31.23.213:22-139.178.89.65:54676.service: Deactivated successfully. Feb 12 21:55:28.525883 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 21:55:28.526717 systemd-logind[1706]: Session 2 logged out. Waiting for processes to exit. Feb 12 21:55:28.528236 systemd-logind[1706]: Removed session 2. Feb 12 21:55:28.546245 systemd[1]: Started sshd@2-172.31.23.213:22-139.178.89.65:54682.service. Feb 12 21:55:28.568851 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 12 21:55:28.667081 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 12 21:55:28.709753 sshd[1951]: Accepted publickey for core from 139.178.89.65 port 54682 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:55:28.711308 sshd[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:55:28.716621 systemd[1]: Started session-3.scope. Feb 12 21:55:28.717567 systemd-logind[1706]: New session 3 of user core. Feb 12 21:55:28.767554 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [HealthCheck] HealthCheck reporting agent health. Feb 12 21:55:28.840266 sshd[1951]: pam_unix(sshd:session): session closed for user core Feb 12 21:55:28.843834 systemd[1]: sshd@2-172.31.23.213:22-139.178.89.65:54682.service: Deactivated successfully. Feb 12 21:55:28.845134 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 21:55:28.847207 systemd-logind[1706]: Session 3 logged out. Waiting for processes to exit. Feb 12 21:55:28.849275 systemd-logind[1706]: Removed session 3. Feb 12 21:55:28.864610 systemd[1]: Started sshd@3-172.31.23.213:22-139.178.89.65:54694.service. 
Feb 12 21:55:28.865396 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Feb 12 21:55:28.964562 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] listening reply.
Feb 12 21:55:29.035018 sshd[1958]: Accepted publickey for core from 139.178.89.65 port 54694 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:55:29.037123 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:55:29.042804 systemd[1]: Started session-4.scope.
Feb 12 21:55:29.043115 systemd-logind[1706]: New session 4 of user core.
Feb 12 21:55:29.063355 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [StartupProcessor] Executing startup processor tasks
Feb 12 21:55:29.162480 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 12 21:55:29.170466 sshd[1958]: pam_unix(sshd:session): session closed for user core
Feb 12 21:55:29.173419 systemd[1]: sshd@3-172.31.23.213:22-139.178.89.65:54694.service: Deactivated successfully.
Feb 12 21:55:29.174745 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit.
Feb 12 21:55:29.174845 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 21:55:29.176320 systemd-logind[1706]: Removed session 4.
Feb 12 21:55:29.194559 systemd[1]: Started sshd@4-172.31.23.213:22-139.178.89.65:54708.service.
Feb 12 21:55:29.261829 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 12 21:55:29.356830 sshd[1965]: Accepted publickey for core from 139.178.89.65 port 54708 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:55:29.357900 sshd[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:55:29.361298 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 12 21:55:29.363411 systemd[1]: Started session-5.scope.
Feb 12 21:55:29.364070 systemd-logind[1706]: New session 5 of user core.
Feb 12 21:55:29.461192 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-04cef99d14d904516?role=subscribe&stream=input
Feb 12 21:55:29.481959 sudo[1969]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 21:55:29.482253 sudo[1969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 21:55:29.561041 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-04cef99d14d904516?role=subscribe&stream=input
Feb 12 21:55:29.661721 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 12 21:55:29.761850 amazon-ssm-agent[1781]: 2024-02-12 21:55:26 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 12 21:55:30.076191 systemd[1]: Reloading.
Feb 12 21:55:30.212686 /usr/lib/systemd/system-generators/torcx-generator[1999]: time="2024-02-12T21:55:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:55:30.216609 /usr/lib/systemd/system-generators/torcx-generator[1999]: time="2024-02-12T21:55:30Z" level=info msg="torcx already run"
Feb 12 21:55:30.380753 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:55:30.380778 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:55:30.412547 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:55:30.538759 systemd[1]: Started kubelet.service.
Feb 12 21:55:30.558802 systemd[1]: Starting coreos-metadata.service...
Feb 12 21:55:30.653820 kubelet[2056]: E0212 21:55:30.653489 2056 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 12 21:55:30.656854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 21:55:30.657062 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 12 21:55:30.698342 coreos-metadata[2063]: Feb 12 21:55:30.698 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 12 21:55:30.699082 coreos-metadata[2063]: Feb 12 21:55:30.699 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Feb 12 21:55:30.699605 coreos-metadata[2063]: Feb 12 21:55:30.699 INFO Fetch successful
Feb 12 21:55:30.699675 coreos-metadata[2063]: Feb 12 21:55:30.699 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Feb 12 21:55:30.700264 coreos-metadata[2063]: Feb 12 21:55:30.700 INFO Fetch successful
Feb 12 21:55:30.700350 coreos-metadata[2063]: Feb 12 21:55:30.700 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Feb 12 21:55:30.700688 coreos-metadata[2063]: Feb 12 21:55:30.700 INFO Fetch successful
Feb 12 21:55:30.700755 coreos-metadata[2063]: Feb 12 21:55:30.700 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Feb 12 21:55:30.701144 coreos-metadata[2063]: Feb 12 21:55:30.701 INFO Fetch successful
Feb 12 21:55:30.701214 coreos-metadata[2063]: Feb 12 21:55:30.701 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Feb 12 21:55:30.701574 coreos-metadata[2063]: Feb 12 21:55:30.701 INFO Fetch successful
Feb 12 21:55:30.701642 coreos-metadata[2063]: Feb 12 21:55:30.701 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Feb 12 21:55:30.702021 coreos-metadata[2063]: Feb 12 21:55:30.702 INFO Fetch successful
Feb 12 21:55:30.702093 coreos-metadata[2063]: Feb 12 21:55:30.702 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Feb 12 21:55:30.702434 coreos-metadata[2063]: Feb 12 21:55:30.702 INFO Fetch successful
Feb 12 21:55:30.702527 coreos-metadata[2063]: Feb 12 21:55:30.702 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Feb 12 21:55:30.702885 coreos-metadata[2063]: Feb 12 21:55:30.702 INFO Fetch successful
Feb 12 21:55:30.715574 systemd[1]: Finished coreos-metadata.service.
Feb 12 21:55:31.313304 systemd[1]: Stopped kubelet.service.
Feb 12 21:55:31.332983 systemd[1]: Reloading.
Feb 12 21:55:31.462050 /usr/lib/systemd/system-generators/torcx-generator[2127]: time="2024-02-12T21:55:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:55:31.466093 /usr/lib/systemd/system-generators/torcx-generator[2127]: time="2024-02-12T21:55:31Z" level=info msg="torcx already run"
Feb 12 21:55:31.576360 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:55:31.576384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:55:31.599209 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:55:31.716511 systemd[1]: Started kubelet.service.
Feb 12 21:55:31.792726 kubelet[2183]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:55:31.792726 kubelet[2183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:55:31.794830 kubelet[2183]: I0212 21:55:31.793298 2183 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 21:55:31.802303 kubelet[2183]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:55:31.802838 kubelet[2183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:55:32.224812 kubelet[2183]: I0212 21:55:32.223962 2183 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 21:55:32.224974 kubelet[2183]: I0212 21:55:32.224847 2183 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 21:55:32.225228 kubelet[2183]: I0212 21:55:32.225206 2183 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 21:55:32.227965 kubelet[2183]: I0212 21:55:32.227933 2183 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 21:55:32.233896 kubelet[2183]: I0212 21:55:32.233864 2183 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 21:55:32.234383 kubelet[2183]: I0212 21:55:32.234364 2183 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 21:55:32.234504 kubelet[2183]: I0212 21:55:32.234475 2183 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 21:55:32.234631 kubelet[2183]: I0212 21:55:32.234506 2183 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 21:55:32.234631 kubelet[2183]: I0212 21:55:32.234523 2183 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 21:55:32.234725 kubelet[2183]: I0212 21:55:32.234648 2183 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:55:32.238829 kubelet[2183]: I0212 21:55:32.238795 2183 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 21:55:32.238829 kubelet[2183]: I0212 21:55:32.238821 2183 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 21:55:32.238985 kubelet[2183]: I0212 21:55:32.238850 2183 kubelet.go:297] "Adding apiserver pod source"
Feb 12 21:55:32.238985 kubelet[2183]: I0212 21:55:32.238867 2183 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 21:55:32.241367 kubelet[2183]: E0212 21:55:32.241349 2183 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:32.241927 kubelet[2183]: E0212 21:55:32.241896 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:32.242178 kubelet[2183]: I0212 21:55:32.242158 2183 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 21:55:32.242700 kubelet[2183]: W0212 21:55:32.242671 2183 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 21:55:32.243122 kubelet[2183]: I0212 21:55:32.243106 2183 server.go:1186] "Started kubelet"
Feb 12 21:55:32.247690 kubelet[2183]: I0212 21:55:32.247665 2183 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 21:55:32.249508 kubelet[2183]: I0212 21:55:32.249481 2183 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 21:55:32.250770 kubelet[2183]: E0212 21:55:32.250672 2183 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 21:55:32.250857 kubelet[2183]: E0212 21:55:32.250777 2183 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 21:55:32.256766 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 21:55:32.257274 kubelet[2183]: I0212 21:55:32.256978 2183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 21:55:32.258994 kubelet[2183]: I0212 21:55:32.258975 2183 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 21:55:32.259493 kubelet[2183]: I0212 21:55:32.259465 2183 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 21:55:32.261076 kubelet[2183]: W0212 21:55:32.261057 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.213" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:55:32.261215 kubelet[2183]: E0212 21:55:32.261204 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.213" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:55:32.261303 kubelet[2183]: W0212 21:55:32.261295 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 21:55:32.261352 kubelet[2183]: E0212 21:55:32.261346 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 21:55:32.261555 kubelet[2183]: E0212 21:55:32.261433 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b0aa6087", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 243079303, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 243079303, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.261759 kubelet[2183]: E0212 21:55:32.261738 2183 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.23.213" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 21:55:32.262552 kubelet[2183]: W0212 21:55:32.261922 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 21:55:32.262552 kubelet[2183]: E0212 21:55:32.261945 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 21:55:32.280505 kubelet[2183]: E0212 21:55:32.280396 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b11fa049", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 250763337, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 250763337, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.336803 kubelet[2183]: I0212 21:55:32.336783 2183 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 21:55:32.336930 kubelet[2183]: I0212 21:55:32.336923 2183 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 21:55:32.336981 kubelet[2183]: I0212 21:55:32.336975 2183 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:55:32.339512 kubelet[2183]: I0212 21:55:32.339493 2183 policy_none.go:49] "None policy: Start"
Feb 12 21:55:32.339888 kubelet[2183]: E0212 21:55:32.339799 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634b230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.213 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336030256, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336030256, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.340552 kubelet[2183]: I0212 21:55:32.340538 2183 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 21:55:32.340639 kubelet[2183]: I0212 21:55:32.340632 2183 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 21:55:32.350452 kubelet[2183]: E0212 21:55:32.350351 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.213 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336035087, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336035087, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.352764 kubelet[2183]: E0212 21:55:32.352657 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634cf89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.213 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336037769, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336037769, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.354999 kubelet[2183]: I0212 21:55:32.354969 2183 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 21:55:32.355473 kubelet[2183]: I0212 21:55:32.355453 2183 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 21:55:32.356691 kubelet[2183]: E0212 21:55:32.356671 2183 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.213\" not found"
Feb 12 21:55:32.357702 kubelet[2183]: E0212 21:55:32.357624 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b76b2434", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 356375604, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 356375604, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.360895 kubelet[2183]: I0212 21:55:32.360876 2183 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.213"
Feb 12 21:55:32.362212 kubelet[2183]: E0212 21:55:32.362195 2183 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.213"
Feb 12 21:55:32.362961 kubelet[2183]: E0212 21:55:32.362840 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634b230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.213 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336030256, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 360821383, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634b230" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.364169 kubelet[2183]: E0212 21:55:32.364113 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.213 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336035087, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 360834047, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634c50f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:55:32.365598 kubelet[2183]: E0212 21:55:32.365530 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634cf89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.213 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336037769, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 360837539, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634cf89" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.464212 kubelet[2183]: E0212 21:55:32.464172 2183 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.23.213" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 21:55:32.496006 kubelet[2183]: I0212 21:55:32.492454 2183 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 21:55:32.530793 kubelet[2183]: I0212 21:55:32.530764 2183 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 21:55:32.530793 kubelet[2183]: I0212 21:55:32.530790 2183 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 21:55:32.531061 kubelet[2183]: I0212 21:55:32.530814 2183 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 21:55:32.531061 kubelet[2183]: E0212 21:55:32.530929 2183 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 12 21:55:32.535195 kubelet[2183]: W0212 21:55:32.535168 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 21:55:32.535400 kubelet[2183]: E0212 21:55:32.535201 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 21:55:32.563541 kubelet[2183]: I0212 21:55:32.563508 2183 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.213"
Feb 12 21:55:32.564984 kubelet[2183]: E0212 21:55:32.564958 2183 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.213"
Feb 12 21:55:32.565823 kubelet[2183]: E0212 21:55:32.565742 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634b230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.213 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336030256, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 563456981, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634b230" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:32.567043 kubelet[2183]: E0212 21:55:32.566975 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.213 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336035087, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 563468956, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634c50f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:55:32.662619 kubelet[2183]: E0212 21:55:32.662519 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634cf89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.213 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336037769, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 563476544, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634cf89" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:55:32.866517 kubelet[2183]: E0212 21:55:32.866251 2183 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.23.213" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 21:55:32.966104 kubelet[2183]: I0212 21:55:32.966076 2183 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.213"
Feb 12 21:55:32.967966 kubelet[2183]: E0212 21:55:32.967942 2183 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.213"
Feb 12 21:55:32.968510 kubelet[2183]: E0212 21:55:32.967927 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634b230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.213 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336030256, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 965993273, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634b230" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:33.056209 kubelet[2183]: E0212 21:55:33.056101 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.213 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336035087, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 966010239, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634c50f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:33.242601 kubelet[2183]: E0212 21:55:33.242480 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:33.256451 kubelet[2183]: E0212 21:55:33.256347 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634cf89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.213 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336037769, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 966044151, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634cf89" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:33.290612 kubelet[2183]: W0212 21:55:33.290571 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.213" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:55:33.290612 kubelet[2183]: E0212 21:55:33.290612 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.213" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:55:33.395672 kubelet[2183]: W0212 21:55:33.395636 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 21:55:33.395672 kubelet[2183]: E0212 21:55:33.395673 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 21:55:33.555615 kubelet[2183]: W0212 21:55:33.555497 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 21:55:33.555615 kubelet[2183]: E0212 21:55:33.555538 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 21:55:33.585930 kubelet[2183]: W0212 21:55:33.585895 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 21:55:33.585930 kubelet[2183]: E0212 21:55:33.585933 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 21:55:33.667869 kubelet[2183]: E0212 21:55:33.667737 2183 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.23.213" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 21:55:33.770476 kubelet[2183]: I0212 21:55:33.770420 2183 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.213"
Feb 12 21:55:33.772684 kubelet[2183]: E0212 21:55:33.772358 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634b230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.213 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336030256, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 33, 769275953, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634b230" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:33.773821 kubelet[2183]: E0212 21:55:33.773797 2183 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.213"
Feb 12 21:55:33.777926 kubelet[2183]: E0212 21:55:33.777825 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.213 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336035087, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 33, 769303269, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634c50f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:33.855910 kubelet[2183]: E0212 21:55:33.855734 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634cf89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.213 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336037769, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 33, 770386071, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634cf89" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:34.242982 kubelet[2183]: E0212 21:55:34.242861 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:34.952769 kubelet[2183]: W0212 21:55:34.952734 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.213" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:55:34.952769 kubelet[2183]: E0212 21:55:34.952773 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.213" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:55:35.243147 kubelet[2183]: E0212 21:55:35.243025 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:35.269239 kubelet[2183]: E0212 21:55:35.269192 2183 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.23.213" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 12 21:55:35.374774 kubelet[2183]: I0212 21:55:35.374738 2183 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.213"
Feb 12 21:55:35.376408 kubelet[2183]: E0212 21:55:35.376381 2183 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.213"
Feb 12 21:55:35.376968 kubelet[2183]: E0212 21:55:35.376888 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634b230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.213 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336030256, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 35, 374681648, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634b230" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 12 21:55:35.378568 kubelet[2183]: E0212 21:55:35.378489 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.213 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336035087, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 35, 374690829, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634c50f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:55:35.379450 kubelet[2183]: E0212 21:55:35.379375 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634cf89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.213 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336037769, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 35, 374696434, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634cf89" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:55:35.619838 kubelet[2183]: W0212 21:55:35.619720 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:55:35.619838 kubelet[2183]: E0212 21:55:35.619763 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 12 21:55:36.053790 kubelet[2183]: W0212 21:55:36.053683 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:55:36.053790 kubelet[2183]: E0212 21:55:36.053724 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 12 21:55:36.235952 kubelet[2183]: W0212 21:55:36.235914 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:55:36.235952 kubelet[2183]: E0212 21:55:36.235953 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 12 21:55:36.243166 kubelet[2183]: E0212 21:55:36.243129 2183 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:37.243873 kubelet[2183]: E0212 21:55:37.243707 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:38.243970 kubelet[2183]: E0212 21:55:38.243929 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:38.474057 kubelet[2183]: E0212 21:55:38.474024 2183 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.23.213" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 12 21:55:38.578478 kubelet[2183]: I0212 21:55:38.578031 2183 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.213" Feb 12 21:55:38.579307 kubelet[2183]: E0212 21:55:38.579281 2183 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.23.213" Feb 12 21:55:38.579406 kubelet[2183]: E0212 21:55:38.579284 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634b230", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasSufficientMemory", Message:"Node 172.31.23.213 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336030256, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 38, 577967240, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634b230" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:55:38.580614 kubelet[2183]: E0212 21:55:38.580364 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.23.213 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336035087, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 38, 577976397, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634c50f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 12 21:55:38.585631 kubelet[2183]: E0212 21:55:38.585539 2183 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.23.213.17b33c40b634cf89", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.23.213", UID:"172.31.23.213", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.23.213 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.23.213"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 55, 32, 336037769, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 21, 55, 38, 577982620, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.23.213.17b33c40b634cf89" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 12 21:55:39.244848 kubelet[2183]: E0212 21:55:39.244794 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:39.330342 kubelet[2183]: W0212 21:55:39.330308 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 21:55:39.330342 kubelet[2183]: E0212 21:55:39.330346 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 12 21:55:40.245661 kubelet[2183]: E0212 21:55:40.245610 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:41.086382 kubelet[2183]: W0212 21:55:41.086339 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.23.213" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:55:41.086382 kubelet[2183]: E0212 21:55:41.086380 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.23.213" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 12 21:55:41.164524 kubelet[2183]: W0212 21:55:41.164485 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 21:55:41.164524 kubelet[2183]: E0212 21:55:41.164525 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 12 21:55:41.246083 kubelet[2183]: E0212 21:55:41.246031 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:41.494305 kubelet[2183]: W0212 21:55:41.494200 2183 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 21:55:41.494305 kubelet[2183]: E0212 21:55:41.494235 2183 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 12 21:55:42.227286 kubelet[2183]: I0212 21:55:42.227240 2183 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 12 21:55:42.246736 kubelet[2183]: E0212 21:55:42.246667 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:42.356838 kubelet[2183]: E0212 21:55:42.356806 2183 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.213\" not found"
Feb 12 21:55:42.627396 kubelet[2183]: E0212 21:55:42.627285 2183 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.23.213" not found
Feb 12 21:55:43.246890 kubelet[2183]: E0212 21:55:43.246850 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:43.486539 amazon-ssm-agent[1781]: 2024-02-12 21:55:43 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Feb 12 21:55:43.665025 kubelet[2183]: E0212 21:55:43.664842 2183 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.23.213" not found
Feb 12 21:55:44.247776 kubelet[2183]: E0212 21:55:44.247724 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:44.879612 kubelet[2183]: E0212 21:55:44.879582 2183 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.23.213\" not found" node="172.31.23.213"
Feb 12 21:55:44.980839 kubelet[2183]: I0212 21:55:44.980802 2183 kubelet_node_status.go:70] "Attempting to register node" node="172.31.23.213"
Feb 12 21:55:45.067963 kubelet[2183]: I0212 21:55:45.067927 2183 kubelet_node_status.go:73] "Successfully registered node" node="172.31.23.213"
Feb 12 21:55:45.092314 kubelet[2183]: E0212 21:55:45.092281 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.192861 kubelet[2183]: E0212 21:55:45.192752 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.245351 sudo[1969]: pam_unix(sudo:session): session closed for user root
Feb 12 21:55:45.248397 kubelet[2183]: E0212 21:55:45.248366 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:45.268741 sshd[1965]: pam_unix(sshd:session): session closed for user core
Feb 12 21:55:45.272122 systemd[1]: sshd@4-172.31.23.213:22-139.178.89.65:54708.service: Deactivated successfully.
Feb 12 21:55:45.273585 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 21:55:45.273624 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit.
Feb 12 21:55:45.275500 systemd-logind[1706]: Removed session 5.
Feb 12 21:55:45.293204 kubelet[2183]: E0212 21:55:45.293166 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.394038 kubelet[2183]: E0212 21:55:45.393997 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.495126 kubelet[2183]: E0212 21:55:45.495015 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.596011 kubelet[2183]: E0212 21:55:45.595970 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.696196 kubelet[2183]: E0212 21:55:45.696150 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.797196 kubelet[2183]: E0212 21:55:45.797086 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.898791 kubelet[2183]: E0212 21:55:45.898733 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:45.999539 kubelet[2183]: E0212 21:55:45.999497 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:46.100471 kubelet[2183]: E0212 21:55:46.100355 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:46.201071 kubelet[2183]: E0212 21:55:46.201027 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\"
not found"
Feb 12 21:55:46.248822 kubelet[2183]: E0212 21:55:46.248768 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:46.301256 kubelet[2183]: E0212 21:55:46.301206 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:46.402063 kubelet[2183]: E0212 21:55:46.402019 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:46.502242 kubelet[2183]: E0212 21:55:46.502190 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:46.603357 kubelet[2183]: E0212 21:55:46.603311 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:46.704044 kubelet[2183]: E0212 21:55:46.703927 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:46.805072 kubelet[2183]: E0212 21:55:46.804961 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:46.905679 kubelet[2183]: E0212 21:55:46.905622 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.006397 kubelet[2183]: E0212 21:55:47.006292 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.107067 kubelet[2183]: E0212 21:55:47.107018 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.208169 kubelet[2183]: E0212 21:55:47.208123 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.249549 kubelet[2183]: E0212 21:55:47.249509 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:47.309397 kubelet[2183]: E0212 21:55:47.309287 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.409950 kubelet[2183]: E0212 21:55:47.409905 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.510540 kubelet[2183]: E0212 21:55:47.510492 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.613919 kubelet[2183]: E0212 21:55:47.613757 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.714487 kubelet[2183]: E0212 21:55:47.714423 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.815483 kubelet[2183]: E0212 21:55:47.815438 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:47.916156 kubelet[2183]: E0212 21:55:47.916103 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.016745 kubelet[2183]: E0212 21:55:48.016701 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.117386 kubelet[2183]: E0212 21:55:48.117337 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.218249 kubelet[2183]: E0212 21:55:48.218140 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.250600 kubelet[2183]: E0212 21:55:48.250550 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:48.319334 kubelet[2183]: E0212 21:55:48.319084 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.420009 kubelet[2183]: E0212 21:55:48.419964 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.520706 kubelet[2183]: E0212 21:55:48.520598 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.621540 kubelet[2183]: E0212 21:55:48.621498 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.722090 kubelet[2183]: E0212 21:55:48.722051 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.823010 kubelet[2183]: E0212 21:55:48.822885 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:48.923606 kubelet[2183]: E0212 21:55:48.923538 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.024197 kubelet[2183]: E0212 21:55:49.024157 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.124876 kubelet[2183]: E0212 21:55:49.124760 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.225446 kubelet[2183]: E0212 21:55:49.225385 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.250882 kubelet[2183]: E0212 21:55:49.250838 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:49.326505 kubelet[2183]: E0212 21:55:49.326458 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.427318 kubelet[2183]: E0212 21:55:49.427275 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.528308 kubelet[2183]: E0212 21:55:49.528265 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.629238 kubelet[2183]: E0212 21:55:49.629190 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.730160 kubelet[2183]: E0212 21:55:49.730049 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.830842 kubelet[2183]: E0212 21:55:49.830793 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:49.931575 kubelet[2183]: E0212 21:55:49.931533 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.032455 kubelet[2183]: E0212 21:55:50.032308 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.133282 kubelet[2183]: E0212 21:55:50.133231 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.234202 kubelet[2183]: E0212 21:55:50.234153 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.251503 kubelet[2183]: E0212 21:55:50.251368 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:50.335167 kubelet[2183]: E0212 21:55:50.335059 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.436012 kubelet[2183]: E0212 21:55:50.435893 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.536802 kubelet[2183]: E0212 21:55:50.536754 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.637595 kubelet[2183]: E0212 21:55:50.637557 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.738270 kubelet[2183]: E0212 21:55:50.738214 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.839066 kubelet[2183]: E0212 21:55:50.839020 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:50.939752 kubelet[2183]: E0212 21:55:50.939644 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:51.040477 kubelet[2183]: E0212 21:55:51.040414 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:51.141222 kubelet[2183]: E0212 21:55:51.141170 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:51.242117 kubelet[2183]: E0212 21:55:51.242008 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:51.252283 kubelet[2183]: E0212 21:55:51.252230 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:51.342991 kubelet[2183]: E0212 21:55:51.342948 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:51.443952 kubelet[2183]: E0212 21:55:51.443906 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:51.544785 kubelet[2183]: E0212 21:55:51.544678 2183 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.23.213\" not found"
Feb 12 21:55:51.645867 kubelet[2183]: I0212 21:55:51.645836 2183 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 12 21:55:51.646303 env[1715]: time="2024-02-12T21:55:51.646257792Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 21:55:51.646757 kubelet[2183]: I0212 21:55:51.646510 2183 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 12 21:55:52.239617 kubelet[2183]: E0212 21:55:52.239569 2183 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:52.252856 kubelet[2183]: E0212 21:55:52.252816 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:52.252856 kubelet[2183]: I0212 21:55:52.252825 2183 apiserver.go:52] "Watching apiserver"
Feb 12 21:55:52.266135 kubelet[2183]: I0212 21:55:52.266097 2183 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:55:52.266291 kubelet[2183]: I0212 21:55:52.266185 2183 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:55:52.360705 kubelet[2183]: I0212 21:55:52.360676 2183 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 21:55:52.386469 kubelet[2183]: I0212 21:55:52.386340 2183 reconciler_common.go:253]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/249d53c4-cc19-4f8f-9e1d-d212a04ea722-hubble-tls\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386642 kubelet[2183]: I0212 21:55:52.386525 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9845cda3-a398-4258-89b0-4465462aa8b0-kube-proxy\") pod \"kube-proxy-jvrst\" (UID: \"9845cda3-a398-4258-89b0-4465462aa8b0\") " pod="kube-system/kube-proxy-jvrst"
Feb 12 21:55:52.386642 kubelet[2183]: I0212 21:55:52.386583 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-run\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386642 kubelet[2183]: I0212 21:55:52.386622 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-cgroup\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386807 kubelet[2183]: I0212 21:55:52.386669 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-lib-modules\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386807 kubelet[2183]: I0212 21:55:52.386701 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/249d53c4-cc19-4f8f-9e1d-d212a04ea722-clustermesh-secrets\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386807 kubelet[2183]: I0212 21:55:52.386747 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-host-proc-sys-net\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386807 kubelet[2183]: I0212 21:55:52.386782 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-bpf-maps\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386976 kubelet[2183]: I0212 21:55:52.386816 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-host-proc-sys-kernel\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386976 kubelet[2183]: I0212 21:55:52.386879 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9845cda3-a398-4258-89b0-4465462aa8b0-xtables-lock\") pod \"kube-proxy-jvrst\" (UID: \"9845cda3-a398-4258-89b0-4465462aa8b0\") " pod="kube-system/kube-proxy-jvrst"
Feb 12 21:55:52.386976 kubelet[2183]: I0212 21:55:52.386926 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cni-path\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.386976 kubelet[2183]: I0212 21:55:52.386961 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-xtables-lock\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.387146 kubelet[2183]: I0212 21:55:52.387000 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9845cda3-a398-4258-89b0-4465462aa8b0-lib-modules\") pod \"kube-proxy-jvrst\" (UID: \"9845cda3-a398-4258-89b0-4465462aa8b0\") " pod="kube-system/kube-proxy-jvrst"
Feb 12 21:55:52.387146 kubelet[2183]: I0212 21:55:52.387035 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77smc\" (UniqueName: \"kubernetes.io/projected/9845cda3-a398-4258-89b0-4465462aa8b0-kube-api-access-77smc\") pod \"kube-proxy-jvrst\" (UID: \"9845cda3-a398-4258-89b0-4465462aa8b0\") " pod="kube-system/kube-proxy-jvrst"
Feb 12 21:55:52.387146 kubelet[2183]: I0212 21:55:52.387084 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-hostproc\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.387146 kubelet[2183]: I0212 21:55:52.387135 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-etc-cni-netd\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.387302 kubelet[2183]: I0212 21:55:52.387169 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-config-path\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.387302 kubelet[2183]: I0212 21:55:52.387201 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgshz\" (UniqueName: \"kubernetes.io/projected/249d53c4-cc19-4f8f-9e1d-d212a04ea722-kube-api-access-zgshz\") pod \"cilium-2xnfl\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " pod="kube-system/cilium-2xnfl"
Feb 12 21:55:52.387302 kubelet[2183]: I0212 21:55:52.387219 2183 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 21:55:52.574069 env[1715]: time="2024-02-12T21:55:52.572450279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2xnfl,Uid:249d53c4-cc19-4f8f-9e1d-d212a04ea722,Namespace:kube-system,Attempt:0,}"
Feb 12 21:55:52.873496 env[1715]: time="2024-02-12T21:55:52.873239560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jvrst,Uid:9845cda3-a398-4258-89b0-4465462aa8b0,Namespace:kube-system,Attempt:0,}"
Feb 12 21:55:53.148277 env[1715]: time="2024-02-12T21:55:53.148222860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:55:53.149700 env[1715]: time="2024-02-12T21:55:53.149660481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:55:53.156612 env[1715]: time="2024-02-12T21:55:53.156567359Z" level=info msg="ImageCreate event
&ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:55:53.158651 env[1715]: time="2024-02-12T21:55:53.158428941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:55:53.159737 env[1715]: time="2024-02-12T21:55:53.159703838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:55:53.160971 env[1715]: time="2024-02-12T21:55:53.160941087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:55:53.164188 env[1715]: time="2024-02-12T21:55:53.164156004Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:55:53.165268 env[1715]: time="2024-02-12T21:55:53.165239334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 21:55:53.205263 env[1715]: time="2024-02-12T21:55:53.202750928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:55:53.205263 env[1715]: time="2024-02-12T21:55:53.202784451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:55:53.205263 env[1715]: time="2024-02-12T21:55:53.202802023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:55:53.205263 env[1715]: time="2024-02-12T21:55:53.202955406Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db5a026046e3aa687c7c96208ab18ffe8e5da7464ecab9bb0540520edcb43858 pid=2277 runtime=io.containerd.runc.v2
Feb 12 21:55:53.208820 env[1715]: time="2024-02-12T21:55:53.208599464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:55:53.208820 env[1715]: time="2024-02-12T21:55:53.208644877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:55:53.208820 env[1715]: time="2024-02-12T21:55:53.208662202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:55:53.209052 env[1715]: time="2024-02-12T21:55:53.208864840Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097 pid=2292 runtime=io.containerd.runc.v2
Feb 12 21:55:53.253833 kubelet[2183]: E0212 21:55:53.253695 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:55:53.300415 env[1715]: time="2024-02-12T21:55:53.300362641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2xnfl,Uid:249d53c4-cc19-4f8f-9e1d-d212a04ea722,Namespace:kube-system,Attempt:0,} returns sandbox id \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\""
Feb 12 21:55:53.303336 env[1715]: time="2024-02-12T21:55:53.303294387Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 21:55:53.304781 env[1715]: time="2024-02-12T21:55:53.304738778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jvrst,Uid:9845cda3-a398-4258-89b0-4465462aa8b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"db5a026046e3aa687c7c96208ab18ffe8e5da7464ecab9bb0540520edcb43858\""
Feb 12 21:55:53.503399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640040616.mount: Deactivated successfully.
Feb 12 21:55:54.138359 env[1715]: time="2024-02-12T21:55:54.138256923Z" level=error msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" failed" error="failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn03.quay.io/quayio-production-s3/sha256/3e/3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240212%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240212T215554Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=d6e85aaad2872dcc71eb13496b0c41b2c2e2f666dd5aeb6fec6f95abd8c88e48&cf_sign=H6WAVjs3jQNneR577XF5iKAbIW5dvg0XF4bI%2FU1iF3wJCziFOloh7J4mgikYkLHm42nkutFHK1PQ1L6SShSmnUOn58TS2Xp9xkXei9h6ieAOqf%2BbLcbSxJyvQFaZh2ZP9Ql5fn732HukGMDxqLjxbJ%2F62TbuecyFT%2Bl7h4Ip%2B%2FCZXK%2FTJYLxnE9EZ6wdnAX7Qe0X34xW5jvEhFOy2fOjf%2FLXY6sCqGfxkLrzXhTqtsdaq%2Bo9BWlHGMlRVSrKgIwXNakbgfhvK1sJETJ1HxNc59fI6xTwLKRN4uOq0jvnuz91kwRwjDX9IsRXCouqWvOpkaKRashb5nud7r2IOGucnA%3D%3D&cf_expiry=1707775554&region=us-east-1&namespace=cilium\": dial tcp: lookup cdn03.quay.io: no such host" Feb 12 21:55:54.138890 kubelet[2183]: E0212 21:55:54.138796 2183 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn03.quay.io/quayio-production-s3/sha256/3e/3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240212%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240212T215554Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=d6e85aaad2872dcc71eb13496b0c41b2c2e2f666dd5aeb6fec6f95abd8c88e48&cf_sign=H6WAVjs3jQNneR577XF5iKAbIW5dvg0XF4bI%2FU1iF3wJCziFOloh7J4mgikYkLHm42nkutFHK1PQ1L6SShSmnUOn58TS2Xp9xkXei9h6ieAOqf%2BbLcbSxJyvQFaZh2ZP9Ql5fn732HukGMDxqLjxbJ%2F62TbuecyFT%2Bl7h4Ip%2B%2FCZXK%2FTJYLxnE9EZ6wdnAX7Qe0X34xW5jvEhFOy2fOjf%2FLXY6sCqGfxkLrzXhTqtsdaq%2Bo9BWlHGMlRVSrKgIwXNakbgfhvK1sJETJ1HxNc59fI6xTwLKRN4uOq0jvnuz91kwRwjDX9IsRXCouqWvOpkaKRashb5nud7r2IOGucnA%3D%3D&cf_expiry=1707775554&region=us-east-1&namespace=cilium\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Feb 12 21:55:54.139405 kubelet[2183]: E0212 21:55:54.138883 2183 kuberuntime_image.go:53] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn03.quay.io/quayio-production-s3/sha256/3e/3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240212%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240212T215554Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=d6e85aaad2872dcc71eb13496b0c41b2c2e2f666dd5aeb6fec6f95abd8c88e48&cf_sign=H6WAVjs3jQNneR577XF5iKAbIW5dvg0XF4bI%2FU1iF3wJCziFOloh7J4mgikYkLHm42nkutFHK1PQ1L6SShSmnUOn58TS2Xp9xkXei9h6ieAOqf%2BbLcbSxJyvQFaZh2ZP9Ql5fn732HukGMDxqLjxbJ%2F62TbuecyFT%2Bl7h4Ip%2B%2FCZXK%2FTJYLxnE9EZ6wdnAX7Qe0X34xW5jvEhFOy2fOjf%2FLXY6sCqGfxkLrzXhTqtsdaq%2Bo9BWlHGMlRVSrKgIwXNakbgfhvK1sJETJ1HxNc59fI6xTwLKRN4uOq0jvnuz91kwRwjDX9IsRXCouqWvOpkaKRashb5nud7r2IOGucnA%3D%3D&cf_expiry=1707775554&region=us-east-1&namespace=cilium\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5" Feb 12 21:55:54.139692 kubelet[2183]: E0212 21:55:54.139668 2183 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 21:55:54.139692 kubelet[2183]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 21:55:54.139692 kubelet[2183]: rm /hostbin/cilium-mount Feb 12 21:55:54.140232 kubelet[2183]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zgshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:unconfined_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-2xnfl_kube-system(249d53c4-cc19-4f8f-9e1d-d212a04ea722): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
"https://cdn03.quay.io/quayio-production-s3/sha256/3e/3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240212%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240212T215554Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=d6e85aaad2872dcc71eb13496b0c41b2c2e2f666dd5aeb6fec6f95abd8c88e48&cf_sign=H6WAVjs3jQNneR577XF5iKAbIW5dvg0XF4bI%2FU1iF3wJCziFOloh7J4mgikYkLHm42nkutFHK1PQ1L6SShSmnUOn58TS2Xp9xkXei9h6ieAOqf%2BbLcbSxJyvQFaZh2ZP9Ql5fn732HukGMDxqLjxbJ%2F62TbuecyFT%2Bl7h4Ip%2B%2FCZXK%2FTJYLxnE9EZ6wdnAX7Qe0X34xW5jvEhFOy2fOjf%2FLXY6sCqGfxkLrzXhTqtsdaq%2Bo9BWlHGMlRVSrKgIwXNakbgfhvK1sJETJ1HxNc59fI6xTwLKRN4uOq0jvnuz91kwRwjDX9IsRXCouqWvOpkaKRashb5nud7r2IOGucnA%3D%3D&cf_expiry=1707775554&region=us-east-1&namespace=cilium": dial tcp: lookup cdn03.quay.io: no such host Feb 12 21:55:54.140232 kubelet[2183]: E0212 21:55:54.140205 2183 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\\\"https://cdn03.quay.io/quayio-production-s3/sha256/3e/3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240212%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240212T215554Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=d6e85aaad2872dcc71eb13496b0c41b2c2e2f666dd5aeb6fec6f95abd8c88e48&cf_sign=H6WAVjs3jQNneR577XF5iKAbIW5dvg0XF4bI%2FU1iF3wJCziFOloh7J4mgikYkLHm42nkutFHK1PQ1L6SShSmnUOn58TS2Xp9xkXei9h6ieAOqf%2BbLcbSxJyvQFaZh2ZP9Ql5fn732HukGMDxqLjxbJ%2F62TbuecyFT%2Bl7h4Ip%2B%2FCZXK%2FTJYLxnE9EZ6wdnAX7Qe0X34xW5jvEhFOy2fOjf%2FLXY6sCqGfxkLrzXhTqtsdaq%2Bo9BWlHGMlRVSrKgIwXNakbgfhvK1sJETJ1HxNc59fI6xTwLKRN4uOq0jvnuz91kwRwjDX9IsRXCouqWvOpkaKRashb5nud7r2IOGucnA%3D%3D&cf_expiry=1707775554&region=us-east-1&namespace=cilium\\\": dial tcp: lookup cdn03.quay.io: no such host\"" pod="kube-system/cilium-2xnfl" podUID=249d53c4-cc19-4f8f-9e1d-d212a04ea722 Feb 12 21:55:54.140777 env[1715]: time="2024-02-12T21:55:54.140728507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 21:55:54.254836 kubelet[2183]: E0212 21:55:54.254801 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:54.573903 kubelet[2183]: E0212 21:55:54.573778 2183 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"\"" pod="kube-system/cilium-2xnfl" podUID=249d53c4-cc19-4f8f-9e1d-d212a04ea722 Feb 12 21:55:55.255325 kubelet[2183]: E0212 21:55:55.255268 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:55.344540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3448752334.mount: Deactivated successfully. 
Feb 12 21:55:55.905683 env[1715]: time="2024-02-12T21:55:55.905628573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:55.920498 env[1715]: time="2024-02-12T21:55:55.920454086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:55.931554 env[1715]: time="2024-02-12T21:55:55.931506290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:55.937519 env[1715]: time="2024-02-12T21:55:55.937471521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:55.938011 env[1715]: time="2024-02-12T21:55:55.937976578Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 21:55:55.940596 env[1715]: time="2024-02-12T21:55:55.940562740Z" level=info msg="CreateContainer within sandbox \"db5a026046e3aa687c7c96208ab18ffe8e5da7464ecab9bb0540520edcb43858\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 21:55:55.973109 env[1715]: time="2024-02-12T21:55:55.973065657Z" level=info msg="CreateContainer within sandbox \"db5a026046e3aa687c7c96208ab18ffe8e5da7464ecab9bb0540520edcb43858\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4fb203fb372dbc87326029cbaece4e1b6278c1bb9ebb53cdf35e4ee00d080026\"" Feb 12 21:55:55.974157 env[1715]: time="2024-02-12T21:55:55.974092799Z" level=info msg="StartContainer for 
\"4fb203fb372dbc87326029cbaece4e1b6278c1bb9ebb53cdf35e4ee00d080026\"" Feb 12 21:55:55.997843 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 12 21:55:56.064856 env[1715]: time="2024-02-12T21:55:56.063673707Z" level=info msg="StartContainer for \"4fb203fb372dbc87326029cbaece4e1b6278c1bb9ebb53cdf35e4ee00d080026\" returns successfully" Feb 12 21:55:56.256303 kubelet[2183]: E0212 21:55:56.256168 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:57.257093 kubelet[2183]: E0212 21:55:57.257054 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:58.261409 kubelet[2183]: E0212 21:55:58.261355 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:55:59.261899 kubelet[2183]: E0212 21:55:59.261846 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:00.262120 kubelet[2183]: E0212 21:56:00.262067 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:01.263186 kubelet[2183]: E0212 21:56:01.263137 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:02.263398 kubelet[2183]: E0212 21:56:02.263345 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:03.263672 kubelet[2183]: E0212 21:56:03.263620 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:04.264364 kubelet[2183]: E0212 21:56:04.264300 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 
21:56:05.265201 kubelet[2183]: E0212 21:56:05.265163 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:05.532892 env[1715]: time="2024-02-12T21:56:05.532638431Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 21:56:05.546051 kubelet[2183]: I0212 21:56:05.546020 2183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jvrst" podStartSLOduration=-9.223372016308811e+09 pod.CreationTimestamp="2024-02-12 21:55:45 +0000 UTC" firstStartedPulling="2024-02-12 21:55:53.306589312 +0000 UTC m=+21.584037353" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:56.600595541 +0000 UTC m=+24.878043381" watchObservedRunningTime="2024-02-12 21:56:05.545964527 +0000 UTC m=+33.823412365" Feb 12 21:56:06.265386 kubelet[2183]: E0212 21:56:06.265345 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:07.266385 kubelet[2183]: E0212 21:56:07.266350 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:08.267350 kubelet[2183]: E0212 21:56:08.267284 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:09.268155 kubelet[2183]: E0212 21:56:09.268110 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:10.268915 kubelet[2183]: E0212 21:56:10.268824 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:10.982585 update_engine[1707]: I0212 21:56:10.982487 1707 update_attempter.cc:509] Updating boot flags... 
Feb 12 21:56:11.269997 kubelet[2183]: E0212 21:56:11.269812 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:11.943413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292325920.mount: Deactivated successfully. Feb 12 21:56:12.240274 kubelet[2183]: E0212 21:56:12.239826 2183 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:12.270578 kubelet[2183]: E0212 21:56:12.270498 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:13.270996 kubelet[2183]: E0212 21:56:13.270956 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:13.513442 amazon-ssm-agent[1781]: 2024-02-12 21:56:13 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 12 21:56:14.272063 kubelet[2183]: E0212 21:56:14.272026 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:15.272701 kubelet[2183]: E0212 21:56:15.272631 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:15.634631 env[1715]: time="2024-02-12T21:56:15.633976063Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:15.637898 env[1715]: time="2024-02-12T21:56:15.637859112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 
21:56:15.640774 env[1715]: time="2024-02-12T21:56:15.640737782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:15.641755 env[1715]: time="2024-02-12T21:56:15.641676828Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 21:56:15.645451 env[1715]: time="2024-02-12T21:56:15.645394353Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:56:15.668268 env[1715]: time="2024-02-12T21:56:15.668219280Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\"" Feb 12 21:56:15.668989 env[1715]: time="2024-02-12T21:56:15.668955138Z" level=info msg="StartContainer for \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\"" Feb 12 21:56:15.774715 env[1715]: time="2024-02-12T21:56:15.772672007Z" level=info msg="StartContainer for \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\" returns successfully" Feb 12 21:56:16.155513 env[1715]: time="2024-02-12T21:56:16.155459527Z" level=info msg="shim disconnected" id=7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c Feb 12 21:56:16.155513 env[1715]: time="2024-02-12T21:56:16.155510286Z" level=warning msg="cleaning up after shim disconnected" id=7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c namespace=k8s.io Feb 12 21:56:16.156665 env[1715]: 
time="2024-02-12T21:56:16.155522340Z" level=info msg="cleaning up dead shim" Feb 12 21:56:16.172771 env[1715]: time="2024-02-12T21:56:16.172725593Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2644 runtime=io.containerd.runc.v2\n" Feb 12 21:56:16.273317 kubelet[2183]: E0212 21:56:16.273271 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:16.638699 env[1715]: time="2024-02-12T21:56:16.638644542Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 21:56:16.656162 systemd[1]: run-containerd-runc-k8s.io-7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c-runc.eJMhgo.mount: Deactivated successfully. Feb 12 21:56:16.656586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c-rootfs.mount: Deactivated successfully. Feb 12 21:56:16.679200 env[1715]: time="2024-02-12T21:56:16.679149316Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\"" Feb 12 21:56:16.680000 env[1715]: time="2024-02-12T21:56:16.679968492Z" level=info msg="StartContainer for \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\"" Feb 12 21:56:16.754141 env[1715]: time="2024-02-12T21:56:16.754096494Z" level=info msg="StartContainer for \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\" returns successfully" Feb 12 21:56:16.759753 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 21:56:16.760290 systemd[1]: Stopped systemd-sysctl.service. 
Feb 12 21:56:16.762051 systemd[1]: Stopping systemd-sysctl.service... Feb 12 21:56:16.766157 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:56:16.786682 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:56:16.815795 env[1715]: time="2024-02-12T21:56:16.815736650Z" level=info msg="shim disconnected" id=73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd Feb 12 21:56:16.815795 env[1715]: time="2024-02-12T21:56:16.815794006Z" level=warning msg="cleaning up after shim disconnected" id=73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd namespace=k8s.io Feb 12 21:56:16.816157 env[1715]: time="2024-02-12T21:56:16.815806853Z" level=info msg="cleaning up dead shim" Feb 12 21:56:16.826518 env[1715]: time="2024-02-12T21:56:16.826465658Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2710 runtime=io.containerd.runc.v2\n" Feb 12 21:56:17.273988 kubelet[2183]: E0212 21:56:17.273945 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:17.636342 env[1715]: time="2024-02-12T21:56:17.636292486Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 21:56:17.656053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd-rootfs.mount: Deactivated successfully. 
Feb 12 21:56:17.661055 env[1715]: time="2024-02-12T21:56:17.661004487Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\"" Feb 12 21:56:17.662056 env[1715]: time="2024-02-12T21:56:17.662014660Z" level=info msg="StartContainer for \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\"" Feb 12 21:56:17.705787 systemd[1]: run-containerd-runc-k8s.io-82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43-runc.i4huFQ.mount: Deactivated successfully. Feb 12 21:56:17.742642 env[1715]: time="2024-02-12T21:56:17.742594315Z" level=info msg="StartContainer for \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\" returns successfully" Feb 12 21:56:17.768009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43-rootfs.mount: Deactivated successfully. 
Feb 12 21:56:17.786174 env[1715]: time="2024-02-12T21:56:17.786119203Z" level=info msg="shim disconnected" id=82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43 Feb 12 21:56:17.786174 env[1715]: time="2024-02-12T21:56:17.786173345Z" level=warning msg="cleaning up after shim disconnected" id=82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43 namespace=k8s.io Feb 12 21:56:17.786588 env[1715]: time="2024-02-12T21:56:17.786185729Z" level=info msg="cleaning up dead shim" Feb 12 21:56:17.800116 env[1715]: time="2024-02-12T21:56:17.800070122Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2767 runtime=io.containerd.runc.v2\n" Feb 12 21:56:18.276377 kubelet[2183]: E0212 21:56:18.276272 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:18.636958 env[1715]: time="2024-02-12T21:56:18.636921833Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 21:56:18.664564 env[1715]: time="2024-02-12T21:56:18.664512902Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\"" Feb 12 21:56:18.665649 env[1715]: time="2024-02-12T21:56:18.665615181Z" level=info msg="StartContainer for \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\"" Feb 12 21:56:18.697103 systemd[1]: run-containerd-runc-k8s.io-bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f-runc.8pRbVU.mount: Deactivated successfully. 
Feb 12 21:56:18.738873 env[1715]: time="2024-02-12T21:56:18.738824742Z" level=info msg="StartContainer for \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\" returns successfully" Feb 12 21:56:18.777335 env[1715]: time="2024-02-12T21:56:18.777251103Z" level=info msg="shim disconnected" id=bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f Feb 12 21:56:18.777335 env[1715]: time="2024-02-12T21:56:18.777334832Z" level=warning msg="cleaning up after shim disconnected" id=bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f namespace=k8s.io Feb 12 21:56:18.777680 env[1715]: time="2024-02-12T21:56:18.777349906Z" level=info msg="cleaning up dead shim" Feb 12 21:56:18.787967 env[1715]: time="2024-02-12T21:56:18.787890822Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:56:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2827 runtime=io.containerd.runc.v2\n" Feb 12 21:56:19.276791 kubelet[2183]: E0212 21:56:19.276745 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:19.641776 env[1715]: time="2024-02-12T21:56:19.641725725Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 21:56:19.656693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f-rootfs.mount: Deactivated successfully. 
Feb 12 21:56:19.663183 env[1715]: time="2024-02-12T21:56:19.663130427Z" level=info msg="CreateContainer within sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\"" Feb 12 21:56:19.663851 env[1715]: time="2024-02-12T21:56:19.663804094Z" level=info msg="StartContainer for \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\"" Feb 12 21:56:19.729161 env[1715]: time="2024-02-12T21:56:19.728915228Z" level=info msg="StartContainer for \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\" returns successfully" Feb 12 21:56:19.841640 kubelet[2183]: I0212 21:56:19.840073 2183 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 21:56:20.194477 kernel: Initializing XFRM netlink socket Feb 12 21:56:20.277397 kubelet[2183]: E0212 21:56:20.277327 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:20.657006 systemd[1]: run-containerd-runc-k8s.io-71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055-runc.0XQcLS.mount: Deactivated successfully. 
Feb 12 21:56:20.671496 kubelet[2183]: I0212 21:56:20.671461 2183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2xnfl" podStartSLOduration=-9.223372001183376e+09 pod.CreationTimestamp="2024-02-12 21:55:45 +0000 UTC" firstStartedPulling="2024-02-12 21:55:53.30264292 +0000 UTC m=+21.580090740" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:56:20.671383558 +0000 UTC m=+48.948831398" watchObservedRunningTime="2024-02-12 21:56:20.671399269 +0000 UTC m=+48.948847112" Feb 12 21:56:21.278382 kubelet[2183]: E0212 21:56:21.278337 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:21.853340 systemd-networkd[1513]: cilium_host: Link UP Feb 12 21:56:21.853649 systemd-networkd[1513]: cilium_net: Link UP Feb 12 21:56:21.853655 systemd-networkd[1513]: cilium_net: Gained carrier Feb 12 21:56:21.853846 systemd-networkd[1513]: cilium_host: Gained carrier Feb 12 21:56:21.860617 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 21:56:21.860843 systemd-networkd[1513]: cilium_host: Gained IPv6LL Feb 12 21:56:21.863237 (udev-worker)[2970]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:56:21.863839 (udev-worker)[2971]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:56:22.058771 (udev-worker)[2928]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 21:56:22.075157 systemd-networkd[1513]: cilium_vxlan: Link UP Feb 12 21:56:22.075166 systemd-networkd[1513]: cilium_vxlan: Gained carrier Feb 12 21:56:22.278634 kubelet[2183]: E0212 21:56:22.278577 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:22.378456 kernel: NET: Registered PF_ALG protocol family Feb 12 21:56:22.754608 systemd-networkd[1513]: cilium_net: Gained IPv6LL Feb 12 21:56:23.279354 kubelet[2183]: E0212 21:56:23.279320 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:23.349039 systemd-networkd[1513]: lxc_health: Link UP Feb 12 21:56:23.381045 systemd-networkd[1513]: lxc_health: Gained carrier Feb 12 21:56:23.381791 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 21:56:24.065625 systemd-networkd[1513]: cilium_vxlan: Gained IPv6LL Feb 12 21:56:24.280157 kubelet[2183]: E0212 21:56:24.280118 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:24.482741 systemd-networkd[1513]: lxc_health: Gained IPv6LL Feb 12 21:56:25.281556 kubelet[2183]: E0212 21:56:25.281517 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:25.836750 kubelet[2183]: I0212 21:56:25.836698 2183 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:56:25.886919 kubelet[2183]: I0212 21:56:25.886607 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgsmb\" (UniqueName: \"kubernetes.io/projected/44efac92-a557-433d-9531-6e987831830e-kube-api-access-kgsmb\") pod \"nginx-deployment-8ffc5cf85-fnjsr\" (UID: \"44efac92-a557-433d-9531-6e987831830e\") " pod="default/nginx-deployment-8ffc5cf85-fnjsr" Feb 12 21:56:26.143608 env[1715]: 
time="2024-02-12T21:56:26.143470361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-fnjsr,Uid:44efac92-a557-433d-9531-6e987831830e,Namespace:default,Attempt:0,}" Feb 12 21:56:26.262721 systemd-networkd[1513]: lxcb5eb31d078f6: Link UP Feb 12 21:56:26.267560 kernel: eth0: renamed from tmp25162 Feb 12 21:56:26.275985 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 21:56:26.276106 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb5eb31d078f6: link becomes ready Feb 12 21:56:26.276039 systemd-networkd[1513]: lxcb5eb31d078f6: Gained carrier Feb 12 21:56:26.282345 kubelet[2183]: E0212 21:56:26.282259 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:27.283140 kubelet[2183]: E0212 21:56:27.283108 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:27.555863 systemd-networkd[1513]: lxcb5eb31d078f6: Gained IPv6LL Feb 12 21:56:28.284007 kubelet[2183]: E0212 21:56:28.283963 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:29.284345 kubelet[2183]: E0212 21:56:29.284305 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:29.430906 env[1715]: time="2024-02-12T21:56:29.430792935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:56:29.432098 env[1715]: time="2024-02-12T21:56:29.432019112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:56:29.432465 env[1715]: time="2024-02-12T21:56:29.432367749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:56:29.433149 env[1715]: time="2024-02-12T21:56:29.433071819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2516251401231640461c6c1384bfab247c616685d95d7085c78ae16d1e7fcc35 pid=3346 runtime=io.containerd.runc.v2 Feb 12 21:56:29.466666 systemd[1]: run-containerd-runc-k8s.io-2516251401231640461c6c1384bfab247c616685d95d7085c78ae16d1e7fcc35-runc.tD8XxZ.mount: Deactivated successfully. Feb 12 21:56:29.520069 env[1715]: time="2024-02-12T21:56:29.520026794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-fnjsr,Uid:44efac92-a557-433d-9531-6e987831830e,Namespace:default,Attempt:0,} returns sandbox id \"2516251401231640461c6c1384bfab247c616685d95d7085c78ae16d1e7fcc35\"" Feb 12 21:56:29.524538 env[1715]: time="2024-02-12T21:56:29.524501968Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 21:56:30.285070 kubelet[2183]: E0212 21:56:30.285013 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:31.286006 kubelet[2183]: E0212 21:56:31.285967 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:32.240077 kubelet[2183]: E0212 21:56:32.240035 2183 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:32.286834 kubelet[2183]: E0212 21:56:32.286797 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:33.046614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121766925.mount: Deactivated successfully. 
Feb 12 21:56:33.287833 kubelet[2183]: E0212 21:56:33.287758 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:34.224182 env[1715]: time="2024-02-12T21:56:34.224131570Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:34.227523 env[1715]: time="2024-02-12T21:56:34.227474598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:34.239978 env[1715]: time="2024-02-12T21:56:34.239923576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:34.241390 env[1715]: time="2024-02-12T21:56:34.240933894Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 21:56:34.243715 env[1715]: time="2024-02-12T21:56:34.243682951Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:34.245773 env[1715]: time="2024-02-12T21:56:34.245652980Z" level=info msg="CreateContainer within sandbox \"2516251401231640461c6c1384bfab247c616685d95d7085c78ae16d1e7fcc35\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 12 21:56:34.266073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146556992.mount: Deactivated successfully. 
Feb 12 21:56:34.274778 env[1715]: time="2024-02-12T21:56:34.274731466Z" level=info msg="CreateContainer within sandbox \"2516251401231640461c6c1384bfab247c616685d95d7085c78ae16d1e7fcc35\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6832e5c53b130c2d6a1502e305aa1637a301d6e02ff82b288ec1e4752b125f1b\"" Feb 12 21:56:34.276238 env[1715]: time="2024-02-12T21:56:34.276201437Z" level=info msg="StartContainer for \"6832e5c53b130c2d6a1502e305aa1637a301d6e02ff82b288ec1e4752b125f1b\"" Feb 12 21:56:34.288785 kubelet[2183]: E0212 21:56:34.288741 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:34.388255 env[1715]: time="2024-02-12T21:56:34.388207106Z" level=info msg="StartContainer for \"6832e5c53b130c2d6a1502e305aa1637a301d6e02ff82b288ec1e4752b125f1b\" returns successfully" Feb 12 21:56:34.702695 kubelet[2183]: I0212 21:56:34.702654 2183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-fnjsr" podStartSLOduration=-9.223372027152159e+09 pod.CreationTimestamp="2024-02-12 21:56:25 +0000 UTC" firstStartedPulling="2024-02-12 21:56:29.523980528 +0000 UTC m=+57.801428352" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:56:34.702392928 +0000 UTC m=+62.979840771" watchObservedRunningTime="2024-02-12 21:56:34.702617346 +0000 UTC m=+62.980065190" Feb 12 21:56:35.289799 kubelet[2183]: E0212 21:56:35.289745 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:36.290302 kubelet[2183]: E0212 21:56:36.290253 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:37.290763 kubelet[2183]: E0212 21:56:37.290711 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 
21:56:38.292048 kubelet[2183]: E0212 21:56:38.292006 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:39.293072 kubelet[2183]: E0212 21:56:39.293019 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:40.028474 kubelet[2183]: I0212 21:56:40.028350 2183 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:56:40.173995 kubelet[2183]: I0212 21:56:40.173946 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/8e688f7a-658a-4aaa-8403-f97ec28afc71-data\") pod \"nfs-server-provisioner-0\" (UID: \"8e688f7a-658a-4aaa-8403-f97ec28afc71\") " pod="default/nfs-server-provisioner-0" Feb 12 21:56:40.174244 kubelet[2183]: I0212 21:56:40.174220 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnccq\" (UniqueName: \"kubernetes.io/projected/8e688f7a-658a-4aaa-8403-f97ec28afc71-kube-api-access-fnccq\") pod \"nfs-server-provisioner-0\" (UID: \"8e688f7a-658a-4aaa-8403-f97ec28afc71\") " pod="default/nfs-server-provisioner-0" Feb 12 21:56:40.298241 kubelet[2183]: E0212 21:56:40.297901 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:40.338165 env[1715]: time="2024-02-12T21:56:40.337763600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8e688f7a-658a-4aaa-8403-f97ec28afc71,Namespace:default,Attempt:0,}" Feb 12 21:56:40.390077 systemd-networkd[1513]: lxc08f825a7ddd4: Link UP Feb 12 21:56:40.394209 (udev-worker)[3466]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 21:56:40.395490 kernel: eth0: renamed from tmp158e4 Feb 12 21:56:40.402451 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 21:56:40.402557 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc08f825a7ddd4: link becomes ready Feb 12 21:56:40.402750 systemd-networkd[1513]: lxc08f825a7ddd4: Gained carrier Feb 12 21:56:40.719506 env[1715]: time="2024-02-12T21:56:40.719411100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:56:40.719994 env[1715]: time="2024-02-12T21:56:40.719482950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:56:40.719994 env[1715]: time="2024-02-12T21:56:40.719498563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:56:40.719994 env[1715]: time="2024-02-12T21:56:40.719875051Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/158e4ef75908d5d100ba2242ab8756d4b233af0a2b39c01724d49e5d89d68301 pid=3520 runtime=io.containerd.runc.v2 Feb 12 21:56:40.799664 env[1715]: time="2024-02-12T21:56:40.799629253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:8e688f7a-658a-4aaa-8403-f97ec28afc71,Namespace:default,Attempt:0,} returns sandbox id \"158e4ef75908d5d100ba2242ab8756d4b233af0a2b39c01724d49e5d89d68301\"" Feb 12 21:56:40.801658 env[1715]: time="2024-02-12T21:56:40.801629587Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 12 21:56:41.300425 kubelet[2183]: E0212 21:56:41.300382 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:41.570817 systemd-networkd[1513]: lxc08f825a7ddd4: Gained IPv6LL Feb 12 21:56:42.301401 kubelet[2183]: E0212 
21:56:42.301362 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:43.302732 kubelet[2183]: E0212 21:56:43.302625 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:44.303330 kubelet[2183]: E0212 21:56:44.303259 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:44.521899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750093247.mount: Deactivated successfully. Feb 12 21:56:45.303839 kubelet[2183]: E0212 21:56:45.303766 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:46.304574 kubelet[2183]: E0212 21:56:46.304510 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:47.219766 env[1715]: time="2024-02-12T21:56:47.219682704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:47.222911 env[1715]: time="2024-02-12T21:56:47.222873760Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:47.225179 env[1715]: time="2024-02-12T21:56:47.225146933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:47.227849 env[1715]: time="2024-02-12T21:56:47.227810851Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:47.228718 env[1715]: time="2024-02-12T21:56:47.228683001Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 12 21:56:47.231218 env[1715]: time="2024-02-12T21:56:47.231188358Z" level=info msg="CreateContainer within sandbox \"158e4ef75908d5d100ba2242ab8756d4b233af0a2b39c01724d49e5d89d68301\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 12 21:56:47.248322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628382976.mount: Deactivated successfully. Feb 12 21:56:47.260790 env[1715]: time="2024-02-12T21:56:47.260755467Z" level=info msg="CreateContainer within sandbox \"158e4ef75908d5d100ba2242ab8756d4b233af0a2b39c01724d49e5d89d68301\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7d99e556d7020ffb04ac5cf9fe3233b140ae77be5779f8ae265c163a0ef9a65e\"" Feb 12 21:56:47.262088 env[1715]: time="2024-02-12T21:56:47.262059744Z" level=info msg="StartContainer for \"7d99e556d7020ffb04ac5cf9fe3233b140ae77be5779f8ae265c163a0ef9a65e\"" Feb 12 21:56:47.306462 kubelet[2183]: E0212 21:56:47.305031 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:47.353481 env[1715]: time="2024-02-12T21:56:47.353410994Z" level=info msg="StartContainer for \"7d99e556d7020ffb04ac5cf9fe3233b140ae77be5779f8ae265c163a0ef9a65e\" returns successfully" Feb 12 21:56:47.722209 kubelet[2183]: I0212 21:56:47.722179 2183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372029132635e+09 
pod.CreationTimestamp="2024-02-12 21:56:40 +0000 UTC" firstStartedPulling="2024-02-12 21:56:40.801107113 +0000 UTC m=+69.078554934" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:56:47.721758904 +0000 UTC m=+75.999206746" watchObservedRunningTime="2024-02-12 21:56:47.722140797 +0000 UTC m=+75.999588636" Feb 12 21:56:48.305397 kubelet[2183]: E0212 21:56:48.305355 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:49.305950 kubelet[2183]: E0212 21:56:49.305879 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:50.306988 kubelet[2183]: E0212 21:56:50.306949 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:51.308086 kubelet[2183]: E0212 21:56:51.308037 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:52.239778 kubelet[2183]: E0212 21:56:52.239676 2183 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:52.309189 kubelet[2183]: E0212 21:56:52.309134 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:53.309607 kubelet[2183]: E0212 21:56:53.309557 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:54.310096 kubelet[2183]: E0212 21:56:54.310054 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:55.311544 kubelet[2183]: E0212 21:56:55.311496 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:56.311699 
kubelet[2183]: E0212 21:56:56.311653 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:56.932512 kubelet[2183]: I0212 21:56:56.932467 2183 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:56:57.072968 kubelet[2183]: I0212 21:56:57.072932 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsvn\" (UniqueName: \"kubernetes.io/projected/b05449fb-9eff-4d73-aa5e-71bda614e75a-kube-api-access-vbsvn\") pod \"test-pod-1\" (UID: \"b05449fb-9eff-4d73-aa5e-71bda614e75a\") " pod="default/test-pod-1" Feb 12 21:56:57.073161 kubelet[2183]: I0212 21:56:57.073047 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf4982ec-1d3e-44ca-80d7-32b111959e67\" (UniqueName: \"kubernetes.io/nfs/b05449fb-9eff-4d73-aa5e-71bda614e75a-pvc-cf4982ec-1d3e-44ca-80d7-32b111959e67\") pod \"test-pod-1\" (UID: \"b05449fb-9eff-4d73-aa5e-71bda614e75a\") " pod="default/test-pod-1" Feb 12 21:56:57.226461 kernel: FS-Cache: Loaded Feb 12 21:56:57.277691 kernel: RPC: Registered named UNIX socket transport module. Feb 12 21:56:57.277839 kernel: RPC: Registered udp transport module. Feb 12 21:56:57.277873 kernel: RPC: Registered tcp transport module. Feb 12 21:56:57.277906 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 12 21:56:57.311989 kubelet[2183]: E0212 21:56:57.311902 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:57.337461 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 12 21:56:57.564969 kernel: NFS: Registering the id_resolver key type Feb 12 21:56:57.565155 kernel: Key type id_resolver registered Feb 12 21:56:57.565192 kernel: Key type id_legacy registered Feb 12 21:56:57.607699 nfsidmap[3693]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 12 21:56:57.613189 nfsidmap[3694]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 12 21:56:57.839095 env[1715]: time="2024-02-12T21:56:57.839056163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b05449fb-9eff-4d73-aa5e-71bda614e75a,Namespace:default,Attempt:0,}" Feb 12 21:56:57.883618 systemd-networkd[1513]: lxc2db1f3498e5a: Link UP Feb 12 21:56:57.884693 (udev-worker)[3680]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:56:57.885566 (udev-worker)[3691]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:56:57.893462 kernel: eth0: renamed from tmp2e29b Feb 12 21:56:57.901341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 21:56:57.901474 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2db1f3498e5a: link becomes ready Feb 12 21:56:57.901089 systemd-networkd[1513]: lxc2db1f3498e5a: Gained carrier Feb 12 21:56:58.182482 env[1715]: time="2024-02-12T21:56:58.182392551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:56:58.182482 env[1715]: time="2024-02-12T21:56:58.182447806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:56:58.182742 env[1715]: time="2024-02-12T21:56:58.182464815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:56:58.182742 env[1715]: time="2024-02-12T21:56:58.182611861Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e29b4bc2fda332903eeefcbfb7b1d5b0a24710e7765a0273347ba3915128bc2 pid=3718 runtime=io.containerd.runc.v2 Feb 12 21:56:58.222276 systemd[1]: run-containerd-runc-k8s.io-2e29b4bc2fda332903eeefcbfb7b1d5b0a24710e7765a0273347ba3915128bc2-runc.xohgeg.mount: Deactivated successfully. Feb 12 21:56:58.286215 env[1715]: time="2024-02-12T21:56:58.284662180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b05449fb-9eff-4d73-aa5e-71bda614e75a,Namespace:default,Attempt:0,} returns sandbox id \"2e29b4bc2fda332903eeefcbfb7b1d5b0a24710e7765a0273347ba3915128bc2\"" Feb 12 21:56:58.287737 env[1715]: time="2024-02-12T21:56:58.287697051Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 12 21:56:58.312204 kubelet[2183]: E0212 21:56:58.312160 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:58.641768 env[1715]: time="2024-02-12T21:56:58.641720104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:58.644279 env[1715]: time="2024-02-12T21:56:58.644238874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:58.646484 env[1715]: time="2024-02-12T21:56:58.646454146Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:58.649070 env[1715]: time="2024-02-12T21:56:58.649037054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:56:58.651603 env[1715]: time="2024-02-12T21:56:58.651562544Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 12 21:56:58.656097 env[1715]: time="2024-02-12T21:56:58.656062979Z" level=info msg="CreateContainer within sandbox \"2e29b4bc2fda332903eeefcbfb7b1d5b0a24710e7765a0273347ba3915128bc2\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 12 21:56:58.685903 env[1715]: time="2024-02-12T21:56:58.685817365Z" level=info msg="CreateContainer within sandbox \"2e29b4bc2fda332903eeefcbfb7b1d5b0a24710e7765a0273347ba3915128bc2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2b45abea03201a45454977099486ec2e91e773b4e160e752f7e49ac3a432de83\"" Feb 12 21:56:58.688616 env[1715]: time="2024-02-12T21:56:58.688575427Z" level=info msg="StartContainer for \"2b45abea03201a45454977099486ec2e91e773b4e160e752f7e49ac3a432de83\"" Feb 12 21:56:58.752059 env[1715]: time="2024-02-12T21:56:58.752010500Z" level=info msg="StartContainer for \"2b45abea03201a45454977099486ec2e91e773b4e160e752f7e49ac3a432de83\" returns successfully" Feb 12 21:56:59.235093 systemd-networkd[1513]: lxc2db1f3498e5a: Gained IPv6LL Feb 12 21:56:59.312523 kubelet[2183]: E0212 21:56:59.312488 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:56:59.751184 kubelet[2183]: I0212 21:56:59.751152 2183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="default/test-pod-1" podStartSLOduration=-9.223372017103664e+09 pod.CreationTimestamp="2024-02-12 21:56:40 +0000 UTC" firstStartedPulling="2024-02-12 21:56:58.287272291 +0000 UTC m=+86.564720125" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:56:59.751021107 +0000 UTC m=+88.028468946" watchObservedRunningTime="2024-02-12 21:56:59.751110516 +0000 UTC m=+88.028558358" Feb 12 21:57:00.314055 kubelet[2183]: E0212 21:57:00.314001 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:01.315090 kubelet[2183]: E0212 21:57:01.315038 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:02.315877 kubelet[2183]: E0212 21:57:02.315818 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:03.316369 kubelet[2183]: E0212 21:57:03.316323 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:04.316838 kubelet[2183]: E0212 21:57:04.316783 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:04.985235 env[1715]: time="2024-02-12T21:57:04.985170958Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 21:57:04.991520 env[1715]: time="2024-02-12T21:57:04.991477436Z" level=info msg="StopContainer for \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\" with timeout 1 (s)" Feb 12 21:57:04.991793 env[1715]: time="2024-02-12T21:57:04.991752254Z" level=info msg="Stop container 
\"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\" with signal terminated" Feb 12 21:57:05.000705 systemd-networkd[1513]: lxc_health: Link DOWN Feb 12 21:57:05.000714 systemd-networkd[1513]: lxc_health: Lost carrier Feb 12 21:57:05.136091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055-rootfs.mount: Deactivated successfully. Feb 12 21:57:05.164361 env[1715]: time="2024-02-12T21:57:05.164303498Z" level=info msg="shim disconnected" id=71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055 Feb 12 21:57:05.164361 env[1715]: time="2024-02-12T21:57:05.164356523Z" level=warning msg="cleaning up after shim disconnected" id=71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055 namespace=k8s.io Feb 12 21:57:05.164361 env[1715]: time="2024-02-12T21:57:05.164369100Z" level=info msg="cleaning up dead shim" Feb 12 21:57:05.173514 env[1715]: time="2024-02-12T21:57:05.173459077Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3850 runtime=io.containerd.runc.v2\n" Feb 12 21:57:05.175945 env[1715]: time="2024-02-12T21:57:05.175863499Z" level=info msg="StopContainer for \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\" returns successfully" Feb 12 21:57:05.176686 env[1715]: time="2024-02-12T21:57:05.176641375Z" level=info msg="StopPodSandbox for \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\"" Feb 12 21:57:05.176800 env[1715]: time="2024-02-12T21:57:05.176717258Z" level=info msg="Container to stop \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:57:05.176800 env[1715]: time="2024-02-12T21:57:05.176737485Z" level=info msg="Container to stop \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Feb 12 21:57:05.176800 env[1715]: time="2024-02-12T21:57:05.176755374Z" level=info msg="Container to stop \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:57:05.176800 env[1715]: time="2024-02-12T21:57:05.176771697Z" level=info msg="Container to stop \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:57:05.176800 env[1715]: time="2024-02-12T21:57:05.176788048Z" level=info msg="Container to stop \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:57:05.178683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097-shm.mount: Deactivated successfully. Feb 12 21:57:05.218284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097-rootfs.mount: Deactivated successfully. 
Feb 12 21:57:05.232519 env[1715]: time="2024-02-12T21:57:05.232475574Z" level=info msg="shim disconnected" id=f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097 Feb 12 21:57:05.232764 env[1715]: time="2024-02-12T21:57:05.232733409Z" level=warning msg="cleaning up after shim disconnected" id=f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097 namespace=k8s.io Feb 12 21:57:05.232764 env[1715]: time="2024-02-12T21:57:05.232753638Z" level=info msg="cleaning up dead shim" Feb 12 21:57:05.244371 env[1715]: time="2024-02-12T21:57:05.243081492Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3885 runtime=io.containerd.runc.v2\n" Feb 12 21:57:05.244371 env[1715]: time="2024-02-12T21:57:05.243424546Z" level=info msg="TearDown network for sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" successfully" Feb 12 21:57:05.244371 env[1715]: time="2024-02-12T21:57:05.243469263Z" level=info msg="StopPodSandbox for \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" returns successfully" Feb 12 21:57:05.317998 kubelet[2183]: E0212 21:57:05.317946 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:05.429908 kubelet[2183]: I0212 21:57:05.429826 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-run\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.429908 kubelet[2183]: I0212 21:57:05.429879 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-lib-modules\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: 
\"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.429908 kubelet[2183]: I0212 21:57:05.429909 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-host-proc-sys-kernel\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.429940 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-etc-cni-netd\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.429971 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-config-path\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.429994 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-hostproc\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.430016 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-host-proc-sys-net\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.430039 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-xtables-lock\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.430067 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgshz\" (UniqueName: \"kubernetes.io/projected/249d53c4-cc19-4f8f-9e1d-d212a04ea722-kube-api-access-zgshz\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.430095 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/249d53c4-cc19-4f8f-9e1d-d212a04ea722-clustermesh-secrets\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.430122 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-cgroup\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.430148 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-bpf-maps\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.430174 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cni-path\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430199 kubelet[2183]: I0212 21:57:05.430202 2183 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/249d53c4-cc19-4f8f-9e1d-d212a04ea722-hubble-tls\") pod \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\" (UID: \"249d53c4-cc19-4f8f-9e1d-d212a04ea722\") " Feb 12 21:57:05.430829 kubelet[2183]: I0212 21:57:05.430745 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.430829 kubelet[2183]: I0212 21:57:05.430798 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.430829 kubelet[2183]: I0212 21:57:05.430822 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.431023 kubelet[2183]: I0212 21:57:05.430842 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.431023 kubelet[2183]: I0212 21:57:05.430863 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.431153 kubelet[2183]: W0212 21:57:05.431015 2183 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/249d53c4-cc19-4f8f-9e1d-d212a04ea722/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 21:57:05.433537 kubelet[2183]: I0212 21:57:05.433505 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.434269 kubelet[2183]: I0212 21:57:05.434187 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:57:05.434361 kubelet[2183]: I0212 21:57:05.434217 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-hostproc" (OuterVolumeSpecName: "hostproc") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.434361 kubelet[2183]: I0212 21:57:05.434300 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.434662 kubelet[2183]: I0212 21:57:05.434640 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.434749 kubelet[2183]: I0212 21:57:05.434678 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cni-path" (OuterVolumeSpecName: "cni-path") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.438050 systemd[1]: var-lib-kubelet-pods-249d53c4\x2dcc19\x2d4f8f\x2d9e1d\x2dd212a04ea722-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 21:57:05.444288 kubelet[2183]: I0212 21:57:05.443748 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249d53c4-cc19-4f8f-9e1d-d212a04ea722-kube-api-access-zgshz" (OuterVolumeSpecName: "kube-api-access-zgshz") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "kube-api-access-zgshz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:57:05.444288 kubelet[2183]: I0212 21:57:05.444257 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249d53c4-cc19-4f8f-9e1d-d212a04ea722-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:57:05.444473 kubelet[2183]: I0212 21:57:05.444331 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249d53c4-cc19-4f8f-9e1d-d212a04ea722-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "249d53c4-cc19-4f8f-9e1d-d212a04ea722" (UID: "249d53c4-cc19-4f8f-9e1d-d212a04ea722"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:57:05.531263 kubelet[2183]: I0212 21:57:05.531149 2183 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-etc-cni-netd\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.531263 kubelet[2183]: I0212 21:57:05.531188 2183 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-config-path\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.531263 kubelet[2183]: I0212 21:57:05.531202 2183 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-hostproc\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.531263 kubelet[2183]: I0212 21:57:05.531216 2183 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-host-proc-sys-net\") on node 
\"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.531263 kubelet[2183]: I0212 21:57:05.531231 2183 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-xtables-lock\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.531263 kubelet[2183]: I0212 21:57:05.531243 2183 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/249d53c4-cc19-4f8f-9e1d-d212a04ea722-hubble-tls\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.531810 kubelet[2183]: I0212 21:57:05.531795 2183 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-zgshz\" (UniqueName: \"kubernetes.io/projected/249d53c4-cc19-4f8f-9e1d-d212a04ea722-kube-api-access-zgshz\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.531918 kubelet[2183]: I0212 21:57:05.531909 2183 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/249d53c4-cc19-4f8f-9e1d-d212a04ea722-clustermesh-secrets\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.532020 kubelet[2183]: I0212 21:57:05.532012 2183 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-cgroup\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.532114 kubelet[2183]: I0212 21:57:05.532106 2183 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-bpf-maps\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.532214 kubelet[2183]: I0212 21:57:05.532206 2183 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cni-path\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.532306 kubelet[2183]: I0212 
21:57:05.532299 2183 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-cilium-run\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.532396 kubelet[2183]: I0212 21:57:05.532388 2183 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-lib-modules\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.532497 kubelet[2183]: I0212 21:57:05.532489 2183 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/249d53c4-cc19-4f8f-9e1d-d212a04ea722-host-proc-sys-kernel\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:05.758479 kubelet[2183]: I0212 21:57:05.758424 2183 scope.go:115] "RemoveContainer" containerID="71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055" Feb 12 21:57:05.768053 env[1715]: time="2024-02-12T21:57:05.768007379Z" level=info msg="RemoveContainer for \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\"" Feb 12 21:57:05.774592 env[1715]: time="2024-02-12T21:57:05.774503299Z" level=info msg="RemoveContainer for \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\" returns successfully" Feb 12 21:57:05.774916 kubelet[2183]: I0212 21:57:05.774892 2183 scope.go:115] "RemoveContainer" containerID="bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f" Feb 12 21:57:05.776366 env[1715]: time="2024-02-12T21:57:05.776332390Z" level=info msg="RemoveContainer for \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\"" Feb 12 21:57:05.780456 env[1715]: time="2024-02-12T21:57:05.780411683Z" level=info msg="RemoveContainer for \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\" returns successfully" Feb 12 21:57:05.780848 kubelet[2183]: I0212 21:57:05.780765 2183 scope.go:115] "RemoveContainer" 
containerID="82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43" Feb 12 21:57:05.782776 env[1715]: time="2024-02-12T21:57:05.782411955Z" level=info msg="RemoveContainer for \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\"" Feb 12 21:57:05.788485 env[1715]: time="2024-02-12T21:57:05.788423791Z" level=info msg="RemoveContainer for \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\" returns successfully" Feb 12 21:57:05.788678 kubelet[2183]: I0212 21:57:05.788655 2183 scope.go:115] "RemoveContainer" containerID="73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd" Feb 12 21:57:05.790026 env[1715]: time="2024-02-12T21:57:05.789987502Z" level=info msg="RemoveContainer for \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\"" Feb 12 21:57:05.794813 env[1715]: time="2024-02-12T21:57:05.794774158Z" level=info msg="RemoveContainer for \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\" returns successfully" Feb 12 21:57:05.794986 kubelet[2183]: I0212 21:57:05.794961 2183 scope.go:115] "RemoveContainer" containerID="7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c" Feb 12 21:57:05.796635 env[1715]: time="2024-02-12T21:57:05.796608151Z" level=info msg="RemoveContainer for \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\"" Feb 12 21:57:05.800428 env[1715]: time="2024-02-12T21:57:05.800392083Z" level=info msg="RemoveContainer for \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\" returns successfully" Feb 12 21:57:05.800624 kubelet[2183]: I0212 21:57:05.800601 2183 scope.go:115] "RemoveContainer" containerID="71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055" Feb 12 21:57:05.801022 env[1715]: time="2024-02-12T21:57:05.800952336Z" level=error msg="ContainerStatus for \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\": not found" Feb 12 21:57:05.801152 kubelet[2183]: E0212 21:57:05.801132 2183 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\": not found" containerID="71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055" Feb 12 21:57:05.801237 kubelet[2183]: I0212 21:57:05.801178 2183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055} err="failed to get container status \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\": rpc error: code = NotFound desc = an error occurred when try to find container \"71f4edabb110ce6d53da3fbd90c82e1f602d2db807b1251b0336250d683e9055\": not found" Feb 12 21:57:05.801237 kubelet[2183]: I0212 21:57:05.801194 2183 scope.go:115] "RemoveContainer" containerID="bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f" Feb 12 21:57:05.801446 env[1715]: time="2024-02-12T21:57:05.801379371Z" level=error msg="ContainerStatus for \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\": not found" Feb 12 21:57:05.801565 kubelet[2183]: E0212 21:57:05.801545 2183 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\": not found" containerID="bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f" Feb 12 21:57:05.801639 kubelet[2183]: I0212 21:57:05.801584 2183 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={Type:containerd ID:bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f} err="failed to get container status \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf32189f4b2ca7ceb06295ef62002e46b6042e4e1829c2bc1537043f7017a34f\": not found" Feb 12 21:57:05.801639 kubelet[2183]: I0212 21:57:05.801600 2183 scope.go:115] "RemoveContainer" containerID="82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43" Feb 12 21:57:05.801814 env[1715]: time="2024-02-12T21:57:05.801765034Z" level=error msg="ContainerStatus for \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\": not found" Feb 12 21:57:05.802101 kubelet[2183]: E0212 21:57:05.801901 2183 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\": not found" containerID="82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43" Feb 12 21:57:05.802177 kubelet[2183]: I0212 21:57:05.802106 2183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43} err="failed to get container status \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\": rpc error: code = NotFound desc = an error occurred when try to find container \"82b08ee2964e99e58c2dd43cca4ab18b49b931bafa527fdfa5d11bbe78693b43\": not found" Feb 12 21:57:05.802177 kubelet[2183]: I0212 21:57:05.802123 2183 scope.go:115] "RemoveContainer" containerID="73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd" Feb 12 21:57:05.802347 env[1715]: 
time="2024-02-12T21:57:05.802299800Z" level=error msg="ContainerStatus for \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\": not found" Feb 12 21:57:05.802459 kubelet[2183]: E0212 21:57:05.802444 2183 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\": not found" containerID="73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd" Feb 12 21:57:05.802527 kubelet[2183]: I0212 21:57:05.802481 2183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd} err="failed to get container status \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"73e00bed20b42dcf5427ad2c09100e3067aaaf2d031b4d72f7d782e29a4465cd\": not found" Feb 12 21:57:05.802527 kubelet[2183]: I0212 21:57:05.802495 2183 scope.go:115] "RemoveContainer" containerID="7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c" Feb 12 21:57:05.802755 env[1715]: time="2024-02-12T21:57:05.802703213Z" level=error msg="ContainerStatus for \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\": not found" Feb 12 21:57:05.802884 kubelet[2183]: E0212 21:57:05.802865 2183 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\": not found" containerID="7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c" Feb 12 21:57:05.802960 kubelet[2183]: I0212 21:57:05.802896 2183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c} err="failed to get container status \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7580f797ccda6b696af68aaaba750a7f85f72c68200b7e9de76256556244042c\": not found" Feb 12 21:57:05.965417 systemd[1]: var-lib-kubelet-pods-249d53c4\x2dcc19\x2d4f8f\x2d9e1d\x2dd212a04ea722-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzgshz.mount: Deactivated successfully. Feb 12 21:57:05.965628 systemd[1]: var-lib-kubelet-pods-249d53c4\x2dcc19\x2d4f8f\x2d9e1d\x2dd212a04ea722-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 21:57:06.319143 kubelet[2183]: E0212 21:57:06.319090 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:06.536141 kubelet[2183]: I0212 21:57:06.536103 2183 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=249d53c4-cc19-4f8f-9e1d-d212a04ea722 path="/var/lib/kubelet/pods/249d53c4-cc19-4f8f-9e1d-d212a04ea722/volumes" Feb 12 21:57:07.319912 kubelet[2183]: E0212 21:57:07.319854 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:07.398102 kubelet[2183]: E0212 21:57:07.398063 2183 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 21:57:08.094753 kubelet[2183]: I0212 21:57:08.094716 2183 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:57:08.094936 kubelet[2183]: E0212 21:57:08.094779 2183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="249d53c4-cc19-4f8f-9e1d-d212a04ea722" containerName="mount-cgroup" Feb 12 21:57:08.094936 kubelet[2183]: E0212 21:57:08.094793 2183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="249d53c4-cc19-4f8f-9e1d-d212a04ea722" containerName="apply-sysctl-overwrites" Feb 12 21:57:08.094936 kubelet[2183]: E0212 21:57:08.094802 2183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="249d53c4-cc19-4f8f-9e1d-d212a04ea722" containerName="mount-bpf-fs" Feb 12 21:57:08.094936 kubelet[2183]: E0212 21:57:08.094810 2183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="249d53c4-cc19-4f8f-9e1d-d212a04ea722" containerName="clean-cilium-state" Feb 12 21:57:08.094936 kubelet[2183]: E0212 21:57:08.094818 2183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="249d53c4-cc19-4f8f-9e1d-d212a04ea722" containerName="cilium-agent" Feb 12 
21:57:08.094936 kubelet[2183]: I0212 21:57:08.094845 2183 memory_manager.go:346] "RemoveStaleState removing state" podUID="249d53c4-cc19-4f8f-9e1d-d212a04ea722" containerName="cilium-agent" Feb 12 21:57:08.146954 kubelet[2183]: I0212 21:57:08.146910 2183 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:57:08.250059 kubelet[2183]: I0212 21:57:08.250024 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-cgroup\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250289 kubelet[2183]: I0212 21:57:08.250269 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-bpf-maps\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250388 kubelet[2183]: I0212 21:57:08.250307 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-hostproc\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250388 kubelet[2183]: I0212 21:57:08.250342 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42lx4\" (UniqueName: \"kubernetes.io/projected/adecd920-4ebd-4244-8557-d3bfa3c22c81-kube-api-access-42lx4\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250388 kubelet[2183]: I0212 21:57:08.250371 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-lib-modules\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250534 kubelet[2183]: I0212 21:57:08.250404 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-ipsec-secrets\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250534 kubelet[2183]: I0212 21:57:08.250461 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4aac9982-6484-4a78-b658-ae8318032db2-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-mpfb9\" (UID: \"4aac9982-6484-4a78-b658-ae8318032db2\") " pod="kube-system/cilium-operator-f59cbd8c6-mpfb9" Feb 12 21:57:08.250534 kubelet[2183]: I0212 21:57:08.250497 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cni-path\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250534 kubelet[2183]: I0212 21:57:08.250528 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-etc-cni-netd\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250715 kubelet[2183]: I0212 21:57:08.250562 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-xtables-lock\") pod 
\"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250715 kubelet[2183]: I0212 21:57:08.250598 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adecd920-4ebd-4244-8557-d3bfa3c22c81-clustermesh-secrets\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250715 kubelet[2183]: I0212 21:57:08.250632 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-config-path\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250715 kubelet[2183]: I0212 21:57:08.250667 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-run\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250715 kubelet[2183]: I0212 21:57:08.250700 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-host-proc-sys-kernel\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.250979 kubelet[2183]: I0212 21:57:08.250730 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adecd920-4ebd-4244-8557-d3bfa3c22c81-hubble-tls\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 
21:57:08.250979 kubelet[2183]: I0212 21:57:08.250766 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qgq\" (UniqueName: \"kubernetes.io/projected/4aac9982-6484-4a78-b658-ae8318032db2-kube-api-access-87qgq\") pod \"cilium-operator-f59cbd8c6-mpfb9\" (UID: \"4aac9982-6484-4a78-b658-ae8318032db2\") " pod="kube-system/cilium-operator-f59cbd8c6-mpfb9" Feb 12 21:57:08.250979 kubelet[2183]: I0212 21:57:08.250800 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-host-proc-sys-net\") pod \"cilium-4v2kf\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " pod="kube-system/cilium-4v2kf" Feb 12 21:57:08.320526 kubelet[2183]: E0212 21:57:08.320482 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:08.451091 env[1715]: time="2024-02-12T21:57:08.451045119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-mpfb9,Uid:4aac9982-6484-4a78-b658-ae8318032db2,Namespace:kube-system,Attempt:0,}" Feb 12 21:57:08.468289 env[1715]: time="2024-02-12T21:57:08.468060952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:57:08.468289 env[1715]: time="2024-02-12T21:57:08.468107435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:57:08.468289 env[1715]: time="2024-02-12T21:57:08.468125080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:57:08.468686 env[1715]: time="2024-02-12T21:57:08.468327289Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b764a7f54e1ab7d5848588d00d9630d996dd9044b2337a2bc9220c655a378e63 pid=3914 runtime=io.containerd.runc.v2 Feb 12 21:57:08.539804 env[1715]: time="2024-02-12T21:57:08.539770154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-mpfb9,Uid:4aac9982-6484-4a78-b658-ae8318032db2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b764a7f54e1ab7d5848588d00d9630d996dd9044b2337a2bc9220c655a378e63\"" Feb 12 21:57:08.544213 env[1715]: time="2024-02-12T21:57:08.544179398Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 21:57:08.698637 env[1715]: time="2024-02-12T21:57:08.698592647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4v2kf,Uid:adecd920-4ebd-4244-8557-d3bfa3c22c81,Namespace:kube-system,Attempt:0,}" Feb 12 21:57:08.716580 env[1715]: time="2024-02-12T21:57:08.716146923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:57:08.716752 env[1715]: time="2024-02-12T21:57:08.716197191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:57:08.716752 env[1715]: time="2024-02-12T21:57:08.716213897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:57:08.716752 env[1715]: time="2024-02-12T21:57:08.716368482Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c pid=3957 runtime=io.containerd.runc.v2 Feb 12 21:57:08.764824 env[1715]: time="2024-02-12T21:57:08.764759461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4v2kf,Uid:adecd920-4ebd-4244-8557-d3bfa3c22c81,Namespace:kube-system,Attempt:0,} returns sandbox id \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\"" Feb 12 21:57:08.768699 env[1715]: time="2024-02-12T21:57:08.768640456Z" level=info msg="CreateContainer within sandbox \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:57:08.787686 env[1715]: time="2024-02-12T21:57:08.787633706Z" level=info msg="CreateContainer within sandbox \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d\"" Feb 12 21:57:08.788823 env[1715]: time="2024-02-12T21:57:08.788766726Z" level=info msg="StartContainer for \"80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d\"" Feb 12 21:57:08.843780 env[1715]: time="2024-02-12T21:57:08.843735112Z" level=info msg="StartContainer for \"80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d\" returns successfully" Feb 12 21:57:08.925215 env[1715]: time="2024-02-12T21:57:08.925163111Z" level=info msg="shim disconnected" id=80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d Feb 12 21:57:08.925215 env[1715]: time="2024-02-12T21:57:08.925215589Z" level=warning msg="cleaning up after shim disconnected" id=80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d 
namespace=k8s.io Feb 12 21:57:08.925677 env[1715]: time="2024-02-12T21:57:08.925228411Z" level=info msg="cleaning up dead shim" Feb 12 21:57:08.936144 env[1715]: time="2024-02-12T21:57:08.936098880Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4042 runtime=io.containerd.runc.v2\n" Feb 12 21:57:09.321072 kubelet[2183]: E0212 21:57:09.320959 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:09.773703 env[1715]: time="2024-02-12T21:57:09.773648972Z" level=info msg="StopPodSandbox for \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\"" Feb 12 21:57:09.774373 env[1715]: time="2024-02-12T21:57:09.774330337Z" level=info msg="Container to stop \"80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:57:09.782671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c-shm.mount: Deactivated successfully. Feb 12 21:57:09.824545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c-rootfs.mount: Deactivated successfully. 
Feb 12 21:57:09.843686 env[1715]: time="2024-02-12T21:57:09.843615149Z" level=info msg="shim disconnected" id=923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c Feb 12 21:57:09.843686 env[1715]: time="2024-02-12T21:57:09.843673796Z" level=warning msg="cleaning up after shim disconnected" id=923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c namespace=k8s.io Feb 12 21:57:09.843686 env[1715]: time="2024-02-12T21:57:09.843686475Z" level=info msg="cleaning up dead shim" Feb 12 21:57:09.860335 env[1715]: time="2024-02-12T21:57:09.860288965Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4074 runtime=io.containerd.runc.v2\n" Feb 12 21:57:09.861182 env[1715]: time="2024-02-12T21:57:09.861137345Z" level=info msg="TearDown network for sandbox \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\" successfully" Feb 12 21:57:09.861330 env[1715]: time="2024-02-12T21:57:09.861309449Z" level=info msg="StopPodSandbox for \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\" returns successfully" Feb 12 21:57:09.940961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755501967.mount: Deactivated successfully. 
Feb 12 21:57:09.962464 kubelet[2183]: I0212 21:57:09.962397 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-ipsec-secrets\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963033 kubelet[2183]: I0212 21:57:09.962995 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adecd920-4ebd-4244-8557-d3bfa3c22c81-hubble-tls\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963117 kubelet[2183]: I0212 21:57:09.963040 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-host-proc-sys-net\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963117 kubelet[2183]: I0212 21:57:09.963070 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cni-path\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963117 kubelet[2183]: I0212 21:57:09.963102 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adecd920-4ebd-4244-8557-d3bfa3c22c81-clustermesh-secrets\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963272 kubelet[2183]: I0212 21:57:09.963131 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-host-proc-sys-kernel\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963272 kubelet[2183]: I0212 21:57:09.963158 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-hostproc\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963272 kubelet[2183]: I0212 21:57:09.963193 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-config-path\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963272 kubelet[2183]: I0212 21:57:09.963220 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-cgroup\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963272 kubelet[2183]: I0212 21:57:09.963247 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-bpf-maps\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963967 kubelet[2183]: I0212 21:57:09.963279 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42lx4\" (UniqueName: \"kubernetes.io/projected/adecd920-4ebd-4244-8557-d3bfa3c22c81-kube-api-access-42lx4\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963967 kubelet[2183]: I0212 21:57:09.963308 2183 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-lib-modules\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963967 kubelet[2183]: I0212 21:57:09.963334 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-etc-cni-netd\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963967 kubelet[2183]: I0212 21:57:09.963362 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-xtables-lock\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963967 kubelet[2183]: I0212 21:57:09.963389 2183 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-run\") pod \"adecd920-4ebd-4244-8557-d3bfa3c22c81\" (UID: \"adecd920-4ebd-4244-8557-d3bfa3c22c81\") " Feb 12 21:57:09.963967 kubelet[2183]: I0212 21:57:09.963919 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.964692 kubelet[2183]: W0212 21:57:09.964606 2183 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/adecd920-4ebd-4244-8557-d3bfa3c22c81/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 21:57:09.967461 kubelet[2183]: I0212 21:57:09.967411 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:57:09.967623 kubelet[2183]: I0212 21:57:09.967484 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.967623 kubelet[2183]: I0212 21:57:09.967517 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cni-path" (OuterVolumeSpecName: "cni-path") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.968500 kubelet[2183]: I0212 21:57:09.968072 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.968500 kubelet[2183]: I0212 21:57:09.968155 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-hostproc" (OuterVolumeSpecName: "hostproc") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.970079 kubelet[2183]: I0212 21:57:09.969505 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.970079 kubelet[2183]: I0212 21:57:09.969571 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.970079 kubelet[2183]: I0212 21:57:09.969897 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.970079 kubelet[2183]: I0212 21:57:09.969954 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.970568 kubelet[2183]: I0212 21:57:09.970458 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.986319 kubelet[2183]: I0212 21:57:09.986120 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adecd920-4ebd-4244-8557-d3bfa3c22c81-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:57:09.986319 kubelet[2183]: I0212 21:57:09.986283 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adecd920-4ebd-4244-8557-d3bfa3c22c81-kube-api-access-42lx4" (OuterVolumeSpecName: "kube-api-access-42lx4") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "kube-api-access-42lx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:57:09.987491 kubelet[2183]: I0212 21:57:09.987414 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adecd920-4ebd-4244-8557-d3bfa3c22c81-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:57:09.987685 kubelet[2183]: I0212 21:57:09.987660 2183 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "adecd920-4ebd-4244-8557-d3bfa3c22c81" (UID: "adecd920-4ebd-4244-8557-d3bfa3c22c81"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063800 2183 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cni-path\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063844 2183 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adecd920-4ebd-4244-8557-d3bfa3c22c81-clustermesh-secrets\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063863 2183 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-host-proc-sys-kernel\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063878 2183 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-hostproc\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063896 2183 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-config-path\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063911 2183 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-cgroup\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063926 2183 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-bpf-maps\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063941 2183 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-42lx4\" (UniqueName: \"kubernetes.io/projected/adecd920-4ebd-4244-8557-d3bfa3c22c81-kube-api-access-42lx4\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063957 2183 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-lib-modules\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063971 2183 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-etc-cni-netd\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.063986 2183 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-xtables-lock\") on 
node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.064001 2183 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-run\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.064016 2183 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/adecd920-4ebd-4244-8557-d3bfa3c22c81-cilium-ipsec-secrets\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.064031 2183 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adecd920-4ebd-4244-8557-d3bfa3c22c81-hubble-tls\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.064142 kubelet[2183]: I0212 21:57:10.064047 2183 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/adecd920-4ebd-4244-8557-d3bfa3c22c81-host-proc-sys-net\") on node \"172.31.23.213\" DevicePath \"\"" Feb 12 21:57:10.321327 kubelet[2183]: E0212 21:57:10.321217 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:10.364811 systemd[1]: var-lib-kubelet-pods-adecd920\x2d4ebd\x2d4244\x2d8557\x2dd3bfa3c22c81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d42lx4.mount: Deactivated successfully. Feb 12 21:57:10.365165 systemd[1]: var-lib-kubelet-pods-adecd920\x2d4ebd\x2d4244\x2d8557\x2dd3bfa3c22c81-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 21:57:10.365361 systemd[1]: var-lib-kubelet-pods-adecd920\x2d4ebd\x2d4244\x2d8557\x2dd3bfa3c22c81-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 12 21:57:10.365506 systemd[1]: var-lib-kubelet-pods-adecd920\x2d4ebd\x2d4244\x2d8557\x2dd3bfa3c22c81-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 21:57:10.776675 kubelet[2183]: I0212 21:57:10.776646 2183 scope.go:115] "RemoveContainer" containerID="80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d" Feb 12 21:57:10.778986 env[1715]: time="2024-02-12T21:57:10.778942681Z" level=info msg="RemoveContainer for \"80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d\"" Feb 12 21:57:10.784948 env[1715]: time="2024-02-12T21:57:10.784902503Z" level=info msg="RemoveContainer for \"80b081df60a52c88463dd2283c44823e336e7b7d818861d3a3d2aa3582cde59d\" returns successfully" Feb 12 21:57:10.820640 kubelet[2183]: I0212 21:57:10.820604 2183 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:57:10.820826 kubelet[2183]: E0212 21:57:10.820681 2183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="adecd920-4ebd-4244-8557-d3bfa3c22c81" containerName="mount-cgroup" Feb 12 21:57:10.820826 kubelet[2183]: I0212 21:57:10.820710 2183 memory_manager.go:346] "RemoveStaleState removing state" podUID="adecd920-4ebd-4244-8557-d3bfa3c22c81" containerName="mount-cgroup" Feb 12 21:57:10.969845 kubelet[2183]: I0212 21:57:10.969809 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-bpf-maps\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970026 kubelet[2183]: I0212 21:57:10.969876 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-xtables-lock\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 
12 21:57:10.970026 kubelet[2183]: I0212 21:57:10.969904 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cc957bec-2e2d-44a2-824e-9fe38d693a2b-clustermesh-secrets\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970026 kubelet[2183]: I0212 21:57:10.969947 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cc957bec-2e2d-44a2-824e-9fe38d693a2b-cilium-ipsec-secrets\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970026 kubelet[2183]: I0212 21:57:10.969975 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-host-proc-sys-net\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970026 kubelet[2183]: I0212 21:57:10.970016 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8tf\" (UniqueName: \"kubernetes.io/projected/cc957bec-2e2d-44a2-824e-9fe38d693a2b-kube-api-access-fl8tf\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970266 kubelet[2183]: I0212 21:57:10.970050 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-cilium-run\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970266 kubelet[2183]: I0212 21:57:10.970095 2183 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-hostproc\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970266 kubelet[2183]: I0212 21:57:10.970128 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-cilium-cgroup\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970266 kubelet[2183]: I0212 21:57:10.970172 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-cni-path\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970266 kubelet[2183]: I0212 21:57:10.970204 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-host-proc-sys-kernel\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970266 kubelet[2183]: I0212 21:57:10.970253 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-etc-cni-netd\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970555 kubelet[2183]: I0212 21:57:10.970284 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/cc957bec-2e2d-44a2-824e-9fe38d693a2b-lib-modules\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970555 kubelet[2183]: I0212 21:57:10.970412 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc957bec-2e2d-44a2-824e-9fe38d693a2b-cilium-config-path\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:10.970791 kubelet[2183]: I0212 21:57:10.970505 2183 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cc957bec-2e2d-44a2-824e-9fe38d693a2b-hubble-tls\") pod \"cilium-g5gw9\" (UID: \"cc957bec-2e2d-44a2-824e-9fe38d693a2b\") " pod="kube-system/cilium-g5gw9" Feb 12 21:57:11.043875 env[1715]: time="2024-02-12T21:57:11.042732709Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:57:11.046013 env[1715]: time="2024-02-12T21:57:11.045975362Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:57:11.048059 env[1715]: time="2024-02-12T21:57:11.048029832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:57:11.048507 env[1715]: time="2024-02-12T21:57:11.048473806Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 21:57:11.050720 env[1715]: time="2024-02-12T21:57:11.050690149Z" level=info msg="CreateContainer within sandbox \"b764a7f54e1ab7d5848588d00d9630d996dd9044b2337a2bc9220c655a378e63\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 21:57:11.068382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619372681.mount: Deactivated successfully. Feb 12 21:57:11.070081 env[1715]: time="2024-02-12T21:57:11.070033315Z" level=info msg="CreateContainer within sandbox \"b764a7f54e1ab7d5848588d00d9630d996dd9044b2337a2bc9220c655a378e63\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4fbe33581ccc7648e201d7de986bc132644f6fa63aae8035000f0c099564fc0a\"" Feb 12 21:57:11.072733 env[1715]: time="2024-02-12T21:57:11.072695756Z" level=info msg="StartContainer for \"4fbe33581ccc7648e201d7de986bc132644f6fa63aae8035000f0c099564fc0a\"" Feb 12 21:57:11.136483 env[1715]: time="2024-02-12T21:57:11.135280651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g5gw9,Uid:cc957bec-2e2d-44a2-824e-9fe38d693a2b,Namespace:kube-system,Attempt:0,}" Feb 12 21:57:11.151969 env[1715]: time="2024-02-12T21:57:11.151923778Z" level=info msg="StartContainer for \"4fbe33581ccc7648e201d7de986bc132644f6fa63aae8035000f0c099564fc0a\" returns successfully" Feb 12 21:57:11.168471 env[1715]: time="2024-02-12T21:57:11.167645504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:57:11.168471 env[1715]: time="2024-02-12T21:57:11.167736776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:57:11.168471 env[1715]: time="2024-02-12T21:57:11.167769727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:57:11.168471 env[1715]: time="2024-02-12T21:57:11.167937273Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef pid=4140 runtime=io.containerd.runc.v2 Feb 12 21:57:11.240369 env[1715]: time="2024-02-12T21:57:11.240260956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g5gw9,Uid:cc957bec-2e2d-44a2-824e-9fe38d693a2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\"" Feb 12 21:57:11.243897 env[1715]: time="2024-02-12T21:57:11.243861835Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:57:11.322307 kubelet[2183]: E0212 21:57:11.322035 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:11.419349 env[1715]: time="2024-02-12T21:57:11.419306552Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"54a25bd6ced587d714158cae690786058bd2a8826fd299090b430addf3b77aa1\"" Feb 12 21:57:11.420306 env[1715]: time="2024-02-12T21:57:11.420265283Z" level=info msg="StartContainer for \"54a25bd6ced587d714158cae690786058bd2a8826fd299090b430addf3b77aa1\"" Feb 12 21:57:11.527396 env[1715]: time="2024-02-12T21:57:11.526915545Z" level=info msg="StartContainer for \"54a25bd6ced587d714158cae690786058bd2a8826fd299090b430addf3b77aa1\" returns successfully" Feb 
12 21:57:11.565902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54a25bd6ced587d714158cae690786058bd2a8826fd299090b430addf3b77aa1-rootfs.mount: Deactivated successfully. Feb 12 21:57:11.578040 env[1715]: time="2024-02-12T21:57:11.577914288Z" level=info msg="shim disconnected" id=54a25bd6ced587d714158cae690786058bd2a8826fd299090b430addf3b77aa1 Feb 12 21:57:11.578040 env[1715]: time="2024-02-12T21:57:11.577968181Z" level=warning msg="cleaning up after shim disconnected" id=54a25bd6ced587d714158cae690786058bd2a8826fd299090b430addf3b77aa1 namespace=k8s.io Feb 12 21:57:11.578770 env[1715]: time="2024-02-12T21:57:11.578629424Z" level=info msg="cleaning up dead shim" Feb 12 21:57:11.593589 env[1715]: time="2024-02-12T21:57:11.593541485Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4225 runtime=io.containerd.runc.v2\n" Feb 12 21:57:11.788624 env[1715]: time="2024-02-12T21:57:11.788584078Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 21:57:11.793078 kubelet[2183]: I0212 21:57:11.792848 2183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-mpfb9" podStartSLOduration=-9.223372033061977e+09 pod.CreationTimestamp="2024-02-12 21:57:08 +0000 UTC" firstStartedPulling="2024-02-12 21:57:08.543641718 +0000 UTC m=+96.821089542" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:57:11.792174923 +0000 UTC m=+100.069622768" watchObservedRunningTime="2024-02-12 21:57:11.792799034 +0000 UTC m=+100.070246878" Feb 12 21:57:11.814683 env[1715]: time="2024-02-12T21:57:11.814637266Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da47c0138a5d761ed9a37c6b39683e926f5c7ce73120a83d9ce8d2b26d7a6dc6\"" Feb 12 21:57:11.815317 env[1715]: time="2024-02-12T21:57:11.815204756Z" level=info msg="StartContainer for \"da47c0138a5d761ed9a37c6b39683e926f5c7ce73120a83d9ce8d2b26d7a6dc6\"" Feb 12 21:57:11.877250 env[1715]: time="2024-02-12T21:57:11.877136377Z" level=info msg="StartContainer for \"da47c0138a5d761ed9a37c6b39683e926f5c7ce73120a83d9ce8d2b26d7a6dc6\" returns successfully" Feb 12 21:57:11.921759 env[1715]: time="2024-02-12T21:57:11.921705586Z" level=info msg="shim disconnected" id=da47c0138a5d761ed9a37c6b39683e926f5c7ce73120a83d9ce8d2b26d7a6dc6 Feb 12 21:57:11.921759 env[1715]: time="2024-02-12T21:57:11.921758055Z" level=warning msg="cleaning up after shim disconnected" id=da47c0138a5d761ed9a37c6b39683e926f5c7ce73120a83d9ce8d2b26d7a6dc6 namespace=k8s.io Feb 12 21:57:11.922121 env[1715]: time="2024-02-12T21:57:11.921770789Z" level=info msg="cleaning up dead shim" Feb 12 21:57:11.934677 env[1715]: time="2024-02-12T21:57:11.934631038Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4288 runtime=io.containerd.runc.v2\n" Feb 12 21:57:12.239005 kubelet[2183]: E0212 21:57:12.238955 2183 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:12.322626 kubelet[2183]: E0212 21:57:12.322574 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:12.364080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619870179.mount: Deactivated successfully. 
Feb 12 21:57:12.399408 kubelet[2183]: E0212 21:57:12.399380 2183 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 21:57:12.534180 kubelet[2183]: I0212 21:57:12.533733 2183 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=adecd920-4ebd-4244-8557-d3bfa3c22c81 path="/var/lib/kubelet/pods/adecd920-4ebd-4244-8557-d3bfa3c22c81/volumes" Feb 12 21:57:12.791677 env[1715]: time="2024-02-12T21:57:12.791346725Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 21:57:12.835902 env[1715]: time="2024-02-12T21:57:12.835840930Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8070399618f8a5bdc1f261db4687ac539f326b9a4513a2c4d892533a07d1b0c\"" Feb 12 21:57:12.836691 env[1715]: time="2024-02-12T21:57:12.836662201Z" level=info msg="StartContainer for \"d8070399618f8a5bdc1f261db4687ac539f326b9a4513a2c4d892533a07d1b0c\"" Feb 12 21:57:12.962165 env[1715]: time="2024-02-12T21:57:12.962114345Z" level=info msg="StartContainer for \"d8070399618f8a5bdc1f261db4687ac539f326b9a4513a2c4d892533a07d1b0c\" returns successfully" Feb 12 21:57:13.006518 env[1715]: time="2024-02-12T21:57:13.006331493Z" level=info msg="shim disconnected" id=d8070399618f8a5bdc1f261db4687ac539f326b9a4513a2c4d892533a07d1b0c Feb 12 21:57:13.006782 env[1715]: time="2024-02-12T21:57:13.006521540Z" level=warning msg="cleaning up after shim disconnected" id=d8070399618f8a5bdc1f261db4687ac539f326b9a4513a2c4d892533a07d1b0c namespace=k8s.io Feb 12 21:57:13.006782 env[1715]: time="2024-02-12T21:57:13.006537226Z" level=info msg="cleaning up dead shim" Feb 12 21:57:13.018361 env[1715]: 
time="2024-02-12T21:57:13.018309015Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4349 runtime=io.containerd.runc.v2\n" Feb 12 21:57:13.323362 kubelet[2183]: E0212 21:57:13.323310 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:13.364324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8070399618f8a5bdc1f261db4687ac539f326b9a4513a2c4d892533a07d1b0c-rootfs.mount: Deactivated successfully. Feb 12 21:57:13.794828 env[1715]: time="2024-02-12T21:57:13.794779545Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 21:57:13.821707 env[1715]: time="2024-02-12T21:57:13.821527429Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b5a295abb2bac763a8fb473ba10ef3865e2eeebd261d9d953d0fb9d079c144d\"" Feb 12 21:57:13.822648 env[1715]: time="2024-02-12T21:57:13.822606611Z" level=info msg="StartContainer for \"8b5a295abb2bac763a8fb473ba10ef3865e2eeebd261d9d953d0fb9d079c144d\"" Feb 12 21:57:13.898174 env[1715]: time="2024-02-12T21:57:13.898123904Z" level=info msg="StartContainer for \"8b5a295abb2bac763a8fb473ba10ef3865e2eeebd261d9d953d0fb9d079c144d\" returns successfully" Feb 12 21:57:13.931589 env[1715]: time="2024-02-12T21:57:13.931535741Z" level=info msg="shim disconnected" id=8b5a295abb2bac763a8fb473ba10ef3865e2eeebd261d9d953d0fb9d079c144d Feb 12 21:57:13.931589 env[1715]: time="2024-02-12T21:57:13.931586933Z" level=warning msg="cleaning up after shim disconnected" id=8b5a295abb2bac763a8fb473ba10ef3865e2eeebd261d9d953d0fb9d079c144d namespace=k8s.io Feb 12 21:57:13.932044 env[1715]: 
time="2024-02-12T21:57:13.931599488Z" level=info msg="cleaning up dead shim" Feb 12 21:57:13.941710 env[1715]: time="2024-02-12T21:57:13.941666034Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4408 runtime=io.containerd.runc.v2\n" Feb 12 21:57:14.324310 kubelet[2183]: E0212 21:57:14.324275 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:14.364669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b5a295abb2bac763a8fb473ba10ef3865e2eeebd261d9d953d0fb9d079c144d-rootfs.mount: Deactivated successfully. Feb 12 21:57:14.799711 env[1715]: time="2024-02-12T21:57:14.799634310Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 21:57:14.820680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount123710076.mount: Deactivated successfully. 
Feb 12 21:57:14.834365 env[1715]: time="2024-02-12T21:57:14.834308106Z" level=info msg="CreateContainer within sandbox \"aad0d5030f294839fd4230288e8db600a1e30a08bb69f22e75c79feb75a22fef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c3e33e0dd7f031f79b090554228d26b72e43eaaa2798b36c4440d404cb80a10a\"" Feb 12 21:57:14.835014 env[1715]: time="2024-02-12T21:57:14.834974422Z" level=info msg="StartContainer for \"c3e33e0dd7f031f79b090554228d26b72e43eaaa2798b36c4440d404cb80a10a\"" Feb 12 21:57:14.914505 env[1715]: time="2024-02-12T21:57:14.914448228Z" level=info msg="StartContainer for \"c3e33e0dd7f031f79b090554228d26b72e43eaaa2798b36c4440d404cb80a10a\" returns successfully" Feb 12 21:57:15.325041 kubelet[2183]: E0212 21:57:15.324978 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:15.458465 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 12 21:57:15.819444 kubelet[2183]: I0212 21:57:15.819394 2183 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-g5gw9" podStartSLOduration=5.819360272 pod.CreationTimestamp="2024-02-12 21:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:57:15.8191809 +0000 UTC m=+104.096628742" watchObservedRunningTime="2024-02-12 21:57:15.819360272 +0000 UTC m=+104.096808114" Feb 12 21:57:16.326110 kubelet[2183]: E0212 21:57:16.326055 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:17.131727 kubelet[2183]: I0212 21:57:17.131697 2183 setters.go:548] "Node became not ready" node="172.31.23.213" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 21:57:17.13165391 +0000 UTC m=+105.409101734 LastTransitionTime:2024-02-12 21:57:17.13165391 +0000 UTC m=+105.409101734 
Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 21:57:17.326916 kubelet[2183]: E0212 21:57:17.326851 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:17.659081 systemd[1]: run-containerd-runc-k8s.io-c3e33e0dd7f031f79b090554228d26b72e43eaaa2798b36c4440d404cb80a10a-runc.Spydyw.mount: Deactivated successfully. Feb 12 21:57:18.236316 (udev-worker)[4975]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:57:18.237555 (udev-worker)[4506]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:57:18.251749 systemd-networkd[1513]: lxc_health: Link UP Feb 12 21:57:18.268460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 21:57:18.268918 systemd-networkd[1513]: lxc_health: Gained carrier Feb 12 21:57:18.327539 kubelet[2183]: E0212 21:57:18.327491 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:19.266691 systemd-networkd[1513]: lxc_health: Gained IPv6LL Feb 12 21:57:19.328770 kubelet[2183]: E0212 21:57:19.328726 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:19.938371 systemd[1]: run-containerd-runc-k8s.io-c3e33e0dd7f031f79b090554228d26b72e43eaaa2798b36c4440d404cb80a10a-runc.THRKmf.mount: Deactivated successfully. 
Feb 12 21:57:20.113288 kubelet[2183]: E0212 21:57:20.113128 2183 upgradeaware.go:440] Error proxying data from backend to client: read tcp 127.0.0.1:59506->127.0.0.1:38771: read: connection reset by peer Feb 12 21:57:20.330747 kubelet[2183]: E0212 21:57:20.330509 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:21.331313 kubelet[2183]: E0212 21:57:21.331273 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:22.332586 kubelet[2183]: E0212 21:57:22.332547 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:23.333662 kubelet[2183]: E0212 21:57:23.333501 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:24.335144 kubelet[2183]: E0212 21:57:24.335101 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:25.336191 kubelet[2183]: E0212 21:57:25.336132 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:26.336342 kubelet[2183]: E0212 21:57:26.336290 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:27.336638 kubelet[2183]: E0212 21:57:27.336584 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:28.336967 kubelet[2183]: E0212 21:57:28.336910 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:29.337341 kubelet[2183]: E0212 21:57:29.337289 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 12 21:57:30.337991 kubelet[2183]: E0212 21:57:30.337941 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:31.338168 kubelet[2183]: E0212 21:57:31.338116 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:32.239324 kubelet[2183]: E0212 21:57:32.239274 2183 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:32.264861 env[1715]: time="2024-02-12T21:57:32.264816477Z" level=info msg="StopPodSandbox for \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\"" Feb 12 21:57:32.265311 env[1715]: time="2024-02-12T21:57:32.264917945Z" level=info msg="TearDown network for sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" successfully" Feb 12 21:57:32.265311 env[1715]: time="2024-02-12T21:57:32.264962703Z" level=info msg="StopPodSandbox for \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" returns successfully" Feb 12 21:57:32.265794 env[1715]: time="2024-02-12T21:57:32.265753904Z" level=info msg="RemovePodSandbox for \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\"" Feb 12 21:57:32.266009 env[1715]: time="2024-02-12T21:57:32.265785948Z" level=info msg="Forcibly stopping sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\"" Feb 12 21:57:32.266009 env[1715]: time="2024-02-12T21:57:32.265956843Z" level=info msg="TearDown network for sandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" successfully" Feb 12 21:57:32.281678 env[1715]: time="2024-02-12T21:57:32.281362863Z" level=info msg="RemovePodSandbox \"f10b72a9db07166a90843cfe6c55e0b577de516d5598c8f0caa000747e5fe097\" returns successfully" Feb 12 21:57:32.282929 env[1715]: time="2024-02-12T21:57:32.282790050Z" level=info 
msg="StopPodSandbox for \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\"" Feb 12 21:57:32.283243 env[1715]: time="2024-02-12T21:57:32.283034494Z" level=info msg="TearDown network for sandbox \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\" successfully" Feb 12 21:57:32.283326 env[1715]: time="2024-02-12T21:57:32.283237917Z" level=info msg="StopPodSandbox for \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\" returns successfully" Feb 12 21:57:32.283804 env[1715]: time="2024-02-12T21:57:32.283770271Z" level=info msg="RemovePodSandbox for \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\"" Feb 12 21:57:32.283890 env[1715]: time="2024-02-12T21:57:32.283806635Z" level=info msg="Forcibly stopping sandbox \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\"" Feb 12 21:57:32.283937 env[1715]: time="2024-02-12T21:57:32.283891485Z" level=info msg="TearDown network for sandbox \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\" successfully" Feb 12 21:57:32.294281 env[1715]: time="2024-02-12T21:57:32.294230648Z" level=info msg="RemovePodSandbox \"923c7618fc442fc8513871b422f77255afbd306ae256bacff4e3552a978c459c\" returns successfully" Feb 12 21:57:32.338655 kubelet[2183]: E0212 21:57:32.338618 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:33.339204 kubelet[2183]: E0212 21:57:33.339153 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:34.340227 kubelet[2183]: E0212 21:57:34.340173 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:35.341415 kubelet[2183]: E0212 21:57:35.341361 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:36.341802 
kubelet[2183]: E0212 21:57:36.341751 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:37.342111 kubelet[2183]: E0212 21:57:37.342056 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:38.342258 kubelet[2183]: E0212 21:57:38.342192 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:39.342652 kubelet[2183]: E0212 21:57:39.342610 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:40.343514 kubelet[2183]: E0212 21:57:40.343468 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:41.344312 kubelet[2183]: E0212 21:57:41.344255 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:42.344961 kubelet[2183]: E0212 21:57:42.344901 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:43.345884 kubelet[2183]: E0212 21:57:43.345830 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:44.346080 kubelet[2183]: E0212 21:57:44.346025 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:45.346298 kubelet[2183]: E0212 21:57:45.346250 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:46.347450 kubelet[2183]: E0212 21:57:46.347387 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:47.348242 
kubelet[2183]: E0212 21:57:47.348189 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:47.349561 kubelet[2183]: E0212 21:57:47.349498 2183 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T21:57:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T21:57:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T21:57:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-02-12T21:57:37Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":57035507},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22\\\",\\\"registry.k8s.io/kube-proxy:v1.26.13\\\"],\\\"sizeBytes\\\":23641774},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.
io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.23.213\": Patch \"https://172.31.30.174:6443/api/v1/nodes/172.31.23.213/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 12 21:57:47.429809 kubelet[2183]: E0212 21:57:47.429714 2183 controller.go:189] failed to update lease, error: Put "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 21:57:48.348960 kubelet[2183]: E0212 21:57:48.348903 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:49.349310 kubelet[2183]: E0212 21:57:49.349255 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:50.349638 kubelet[2183]: E0212 21:57:50.349599 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:51.350727 kubelet[2183]: E0212 21:57:51.350674 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:52.239333 kubelet[2183]: E0212 21:57:52.239278 2183 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:52.351071 kubelet[2183]: E0212 21:57:52.351019 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:53.351289 kubelet[2183]: E0212 21:57:53.351234 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:54.352116 kubelet[2183]: E0212 21:57:54.352064 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
12 21:57:55.352964 kubelet[2183]: E0212 21:57:55.352909 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:56.354109 kubelet[2183]: E0212 21:57:56.354059 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 12 21:57:56.474663 kubelet[2183]: E0212 21:57:56.474624 2183 controller.go:189] failed to update lease, error: Put "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": unexpected EOF Feb 12 21:57:56.477770 kubelet[2183]: E0212 21:57:56.477729 2183 controller.go:189] failed to update lease, error: Put "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:57:56.478177 kubelet[2183]: E0212 21:57:56.478148 2183 controller.go:189] failed to update lease, error: Put "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:57:56.478717 kubelet[2183]: E0212 21:57:56.478690 2183 controller.go:189] failed to update lease, error: Put "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:57:56.478717 kubelet[2183]: I0212 21:57:56.478719 2183 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Feb 12 21:57:56.479232 kubelet[2183]: E0212 21:57:56.479198 2183 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection 
refused
Feb 12 21:57:56.680322 kubelet[2183]: E0212 21:57:56.680274 2183 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused
Feb 12 21:57:57.081289 kubelet[2183]: E0212 21:57:57.081179 2183 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused
Feb 12 21:57:57.354941 kubelet[2183]: E0212 21:57:57.354836 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:57:57.470830 kubelet[2183]: E0212 21:57:57.470689 2183 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.23.213\": Get \"https://172.31.30.174:6443/api/v1/nodes/172.31.23.213?timeout=10s\": context deadline exceeded - error from a previous attempt: unexpected EOF"
Feb 12 21:57:57.471519 kubelet[2183]: E0212 21:57:57.471488 2183 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.23.213\": Get \"https://172.31.30.174:6443/api/v1/nodes/172.31.23.213?timeout=10s\": dial tcp 172.31.30.174:6443: connect: connection refused"
Feb 12 21:57:57.471953 kubelet[2183]: E0212 21:57:57.471935 2183 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.23.213\": Get \"https://172.31.30.174:6443/api/v1/nodes/172.31.23.213?timeout=10s\": dial tcp 172.31.30.174:6443: connect: connection refused"
Feb 12 21:57:57.472624 kubelet[2183]: E0212 21:57:57.472600 2183 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.23.213\": Get \"https://172.31.30.174:6443/api/v1/nodes/172.31.23.213?timeout=10s\": dial tcp 172.31.30.174:6443: connect: connection refused"
Feb 12 21:57:57.472624 kubelet[2183]: E0212 21:57:57.472625 2183 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Feb 12 21:57:57.882695 kubelet[2183]: E0212 21:57:57.882645 2183 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused
Feb 12 21:57:58.355356 kubelet[2183]: E0212 21:57:58.355301 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:57:59.356364 kubelet[2183]: E0212 21:57:59.356311 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:57:59.483725 kubelet[2183]: E0212 21:57:59.483682 2183 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused
Feb 12 21:58:00.357196 kubelet[2183]: E0212 21:58:00.357143 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:01.358226 kubelet[2183]: E0212 21:58:01.358140 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:02.358898 kubelet[2183]: E0212 21:58:02.358860 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:03.359742 kubelet[2183]: E0212 21:58:03.359656 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:04.360530 kubelet[2183]: E0212 21:58:04.360474 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:05.361469 kubelet[2183]: E0212 21:58:05.361343 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:06.361846 kubelet[2183]: E0212 21:58:06.361796 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:06.550700 amazon-ssm-agent[1781]: 2024-02-12 21:58:06 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 12 21:58:07.362992 kubelet[2183]: E0212 21:58:07.362937 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:08.363389 kubelet[2183]: E0212 21:58:08.363345 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:09.364136 kubelet[2183]: E0212 21:58:09.364085 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:10.365078 kubelet[2183]: E0212 21:58:10.365025 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:11.365479 kubelet[2183]: E0212 21:58:11.365382 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:12.239880 kubelet[2183]: E0212 21:58:12.239836 2183 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:12.367016 kubelet[2183]: E0212 21:58:12.366960 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:12.685215 kubelet[2183]: E0212 21:58:12.685164 2183 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.213?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 12 21:58:13.367732 kubelet[2183]: E0212 21:58:13.367679 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:14.368277 kubelet[2183]: E0212 21:58:14.368186 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:15.368503 kubelet[2183]: E0212 21:58:15.368463 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:16.368887 kubelet[2183]: E0212 21:58:16.368834 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:17.369710 kubelet[2183]: E0212 21:58:17.369651 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:17.622233 kubelet[2183]: E0212 21:58:17.621909 2183 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.23.213\": Get \"https://172.31.30.174:6443/api/v1/nodes/172.31.23.213?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 12 21:58:18.370488 kubelet[2183]: E0212 21:58:18.370439 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:19.371623 kubelet[2183]: E0212 21:58:19.371573 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:20.372447 kubelet[2183]: E0212 21:58:20.372375 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 12 21:58:21.373101 kubelet[2183]: E0212 21:58:21.373047 2183 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
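The lease-controller retries in the log above double on each failure: 400ms, 800ms, 1.6s, 3.2s, 6.4s. A minimal sketch of that doubling-backoff pattern, assuming a simple geometric schedule with a cap (the function name and cap value are illustrative, not kubelet code):

```python
def backoff_delays(initial=0.4, factor=2.0, cap=7.0):
    """Yield retry delays that double each attempt, stopping at the cap.

    With the defaults this reproduces the intervals seen in the log:
    0.4s, 0.8s, 1.6s, 3.2s, 6.4s.
    """
    delay = initial
    while delay <= cap:
        yield delay
        delay *= factor

# The schedule matches the "will retry in ..." messages above.
print(list(backoff_delays()))  # [0.4, 0.8, 1.6, 3.2, 6.4]
```

Exponential backoff like this keeps a restarting apiserver from being hammered by every kubelet at once while still reconnecting quickly once it returns.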