Feb 9 18:57:22.193364 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 18:57:22.193400 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:57:22.193417 kernel: BIOS-provided physical RAM map: Feb 9 18:57:22.193429 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 9 18:57:22.193454 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 9 18:57:22.193464 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 9 18:57:22.193478 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Feb 9 18:57:22.193486 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Feb 9 18:57:22.193495 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Feb 9 18:57:22.193504 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 9 18:57:22.193514 kernel: NX (Execute Disable) protection: active Feb 9 18:57:22.193523 kernel: SMBIOS 2.7 present. Feb 9 18:57:22.193533 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Feb 9 18:57:22.193543 kernel: Hypervisor detected: KVM Feb 9 18:57:22.193558 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 18:57:22.193569 kernel: kvm-clock: cpu 0, msr 6afaa001, primary cpu clock Feb 9 18:57:22.193580 kernel: kvm-clock: using sched offset of 6858264605 cycles Feb 9 18:57:22.193594 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 18:57:22.195130 kernel: tsc: Detected 2499.994 MHz processor Feb 9 18:57:22.195154 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 18:57:22.195174 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 18:57:22.195222 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Feb 9 18:57:22.195237 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 18:57:22.195251 kernel: Using GB pages for direct mapping Feb 9 18:57:22.195297 kernel: ACPI: Early table checksum verification disabled Feb 9 18:57:22.195312 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Feb 9 18:57:22.195327 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Feb 9 18:57:22.195340 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 9 18:57:22.195387 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Feb 9 18:57:22.195406 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Feb 9 18:57:22.195421 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 9 18:57:22.195482 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 9 18:57:22.195496 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Feb 9 18:57:22.195508 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 9 18:57:22.196080 kernel: ACPI: WAET 
0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Feb 9 18:57:22.196167 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Feb 9 18:57:22.196181 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 9 18:57:22.196198 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Feb 9 18:57:22.196209 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Feb 9 18:57:22.196221 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Feb 9 18:57:22.196237 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Feb 9 18:57:22.196249 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Feb 9 18:57:22.196260 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Feb 9 18:57:22.196272 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Feb 9 18:57:22.196288 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Feb 9 18:57:22.196300 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Feb 9 18:57:22.196313 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Feb 9 18:57:22.196327 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 18:57:22.196340 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 18:57:22.196354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Feb 9 18:57:22.196367 kernel: NUMA: Initialized distance table, cnt=1 Feb 9 18:57:22.196380 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Feb 9 18:57:22.196396 kernel: Zone ranges: Feb 9 18:57:22.196410 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 18:57:22.196423 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Feb 9 18:57:22.196447 kernel: Normal empty Feb 9 18:57:22.196461 kernel: Movable zone start for each node Feb 9 18:57:22.196475 kernel: Early memory node ranges Feb 9 18:57:22.196489 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 9 18:57:22.196502 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Feb 9 18:57:22.196514 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Feb 9 18:57:22.196530 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 18:57:22.196543 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 9 18:57:22.196556 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Feb 9 18:57:22.196569 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 9 18:57:22.196581 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 18:57:22.196594 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Feb 9 18:57:22.196607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 18:57:22.196621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 18:57:22.196632 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 18:57:22.196649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 18:57:22.196663 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 18:57:22.196677 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 18:57:22.196690 kernel: TSC deadline timer available Feb 9 18:57:22.196703 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 18:57:22.196717 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Feb 9 18:57:22.196731 kernel: Booting paravirtualized kernel on KVM Feb 9 18:57:22.196745 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 18:57:22.196759 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 18:57:22.196850 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 18:57:22.196868 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 18:57:22.196881 kernel: pcpu-alloc: [0] 0 1 Feb 9 18:57:22.196892 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Feb 9 18:57:22.196904 kernel: kvm-guest: PV spinlocks enabled Feb 9 18:57:22.196916 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 18:57:22.196928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Feb 9 18:57:22.196940 kernel: Policy zone: DMA32 Feb 9 18:57:22.196955 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:57:22.196971 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 18:57:22.196984 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 18:57:22.196995 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 18:57:22.197008 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 18:57:22.197021 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved) Feb 9 18:57:22.197033 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 18:57:22.197045 kernel: Kernel/User page tables isolation: enabled Feb 9 18:57:22.197057 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 18:57:22.197072 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 18:57:22.197091 kernel: rcu: Hierarchical RCU implementation. Feb 9 18:57:22.197191 kernel: rcu: RCU event tracing is enabled. Feb 9 18:57:22.197205 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 18:57:22.197219 kernel: Rude variant of Tasks RCU enabled. Feb 9 18:57:22.197231 kernel: Tracing variant of Tasks RCU enabled. Feb 9 18:57:22.197244 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 18:57:22.197257 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 18:57:22.197269 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 9 18:57:22.197287 kernel: random: crng init done Feb 9 18:57:22.197351 kernel: Console: colour VGA+ 80x25 Feb 9 18:57:22.197368 kernel: printk: console [ttyS0] enabled Feb 9 18:57:22.197382 kernel: ACPI: Core revision 20210730 Feb 9 18:57:22.197397 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Feb 9 18:57:22.197411 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 18:57:22.197423 kernel: x2apic enabled Feb 9 18:57:22.197474 kernel: Switched APIC routing to physical x2apic. 
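
The kernel command line echoed above is a flat list of key=value parameters (root=LABEL=ROOT, verity.usrhash=..., and so on), and the same string is exposed at /proc/cmdline after boot. A minimal illustrative sketch, not part of the log, for splitting it into a lookup table:

```python
# Illustrative sketch (not part of the boot log): split /proc/cmdline into
# key=value pairs so parameters such as root=LABEL=ROOT or verity.usrhash=...
# can be looked up by name. Duplicate keys (e.g. rootflags above) keep the
# last occurrence; bare flags map to an empty string.
from pathlib import Path

def parse_cmdline(text: str) -> dict:
    params = {}
    for token in text.split():
        key, _, value = token.partition("=")
        params[key] = value
    return params

if __name__ == "__main__":
    params = parse_cmdline(Path("/proc/cmdline").read_text().strip())
    print(params.get("root"))            # e.g. LABEL=ROOT on this system
    print(params.get("verity.usrhash"))
```
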
Feb 9 18:57:22.197487 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Feb 9 18:57:22.197503 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994) Feb 9 18:57:22.197515 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 18:57:22.197527 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 18:57:22.197541 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 18:57:22.197564 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 18:57:22.198236 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 18:57:22.198251 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 18:57:22.198265 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 9 18:57:22.198279 kernel: RETBleed: Vulnerable Feb 9 18:57:22.198293 kernel: Speculative Store Bypass: Vulnerable Feb 9 18:57:22.198306 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 18:57:22.198319 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 18:57:22.198332 kernel: GDS: Unknown: Dependent on hypervisor status Feb 9 18:57:22.198345 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 18:57:22.198364 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 18:57:22.198377 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 18:57:22.198391 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 9 18:57:22.198404 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 9 18:57:22.198417 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 9 18:57:22.198433 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 9 18:57:22.198457 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 9 18:57:22.198470 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 9 18:57:22.198483 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 18:57:22.198496 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 9 18:57:22.198510 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 9 18:57:22.198523 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Feb 9 18:57:22.198537 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Feb 9 18:57:22.198550 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Feb 9 18:57:22.198564 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Feb 9 18:57:22.198577 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Feb 9 18:57:22.198591 kernel: Freeing SMP alternatives memory: 32K Feb 9 18:57:22.198609 kernel: pid_max: default: 32768 minimum: 301 Feb 9 18:57:22.198623 kernel: LSM: Security Framework initializing Feb 9 18:57:22.198636 kernel: SELinux: Initializing. Feb 9 18:57:22.198649 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 18:57:22.198713 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 18:57:22.198726 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 9 18:57:22.198740 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. 
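
The mitigation lines above (Spectre V1/V2, RETBleed, MDS, MMIO Stale Data) are also exported after boot as one file per issue under /sys/devices/system/cpu/vulnerabilities/. A small sketch, not part of the log, for dumping that status:

```python
# Minimal sketch: the mitigation status reported in the dmesg lines above is
# exposed as one file per issue under /sys/devices/system/cpu/vulnerabilities/.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def read_vulnerabilities() -> dict:
    """Map vulnerability name (e.g. 'retbleed', 'mds') to the kernel's status string."""
    return {p.name: p.read_text().strip() for p in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in read_vulnerabilities().items():
        print(f"{name}: {status}")   # e.g. "retbleed: Vulnerable", as in the log above
```
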
Feb 9 18:57:22.198754 kernel: signal: max sigframe size: 3632 Feb 9 18:57:22.198855 kernel: rcu: Hierarchical SRCU implementation. Feb 9 18:57:22.198869 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 18:57:22.198886 kernel: smp: Bringing up secondary CPUs ... Feb 9 18:57:22.198900 kernel: x86: Booting SMP configuration: Feb 9 18:57:22.198913 kernel: .... node #0, CPUs: #1 Feb 9 18:57:22.198926 kernel: kvm-clock: cpu 1, msr 6afaa041, secondary cpu clock Feb 9 18:57:22.198940 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Feb 9 18:57:22.198954 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 9 18:57:22.198969 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 9 18:57:22.198982 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 18:57:22.198996 kernel: smpboot: Max logical packages: 1 Feb 9 18:57:22.199012 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS) Feb 9 18:57:22.199025 kernel: devtmpfs: initialized Feb 9 18:57:22.199038 kernel: x86/mm: Memory block size: 128MB Feb 9 18:57:22.199052 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 18:57:22.199065 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 18:57:22.199079 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 18:57:22.199092 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 18:57:22.199105 kernel: audit: initializing netlink subsys (disabled) Feb 9 18:57:22.199119 kernel: audit: type=2000 audit(1707505040.389:1): state=initialized audit_enabled=0 res=1 Feb 9 18:57:22.199226 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 18:57:22.199239 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 18:57:22.199252 kernel: cpuidle: using governor menu Feb 9 18:57:22.199265 kernel: ACPI: bus type PCI registered Feb 9 18:57:22.199278 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 18:57:22.199292 kernel: dca service started, version 1.12.1 Feb 9 18:57:22.199305 kernel: PCI: Using configuration type 1 for base access Feb 9 18:57:22.199318 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 18:57:22.199332 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 18:57:22.199348 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 18:57:22.199361 kernel: ACPI: Added _OSI(Module Device) Feb 9 18:57:22.199374 kernel: ACPI: Added _OSI(Processor Device) Feb 9 18:57:22.199387 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 18:57:22.199401 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 18:57:22.199414 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 18:57:22.199427 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 18:57:22.199451 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 18:57:22.199465 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 9 18:57:22.199481 kernel: ACPI: Interpreter enabled Feb 9 18:57:22.199495 kernel: ACPI: PM: (supports S0 S5) Feb 9 18:57:22.199508 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 18:57:22.199522 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 18:57:22.199535 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 9 18:57:22.199549 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 18:57:22.199860 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 9 18:57:22.199998 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Feb 9 18:57:22.200019 kernel: acpiphp: Slot [3] registered Feb 9 18:57:22.200033 kernel: acpiphp: Slot [4] registered Feb 9 18:57:22.200046 kernel: acpiphp: Slot [5] registered Feb 9 18:57:22.200058 kernel: acpiphp: Slot [6] registered Feb 9 18:57:22.200070 kernel: acpiphp: Slot [7] registered Feb 9 18:57:22.200082 kernel: acpiphp: Slot [8] registered Feb 9 18:57:22.200094 kernel: acpiphp: Slot [9] registered Feb 9 18:57:22.200107 kernel: acpiphp: Slot [10] registered Feb 9 18:57:22.200120 kernel: acpiphp: Slot [11] registered Feb 9 18:57:22.200137 kernel: acpiphp: Slot [12] registered Feb 9 18:57:22.200151 kernel: acpiphp: Slot [13] registered Feb 9 18:57:22.201447 kernel: acpiphp: Slot [14] registered Feb 9 18:57:22.201465 kernel: acpiphp: Slot [15] registered Feb 9 18:57:22.201482 kernel: acpiphp: Slot [16] registered Feb 9 18:57:22.201498 kernel: acpiphp: Slot [17] registered Feb 9 18:57:22.201513 kernel: acpiphp: Slot [18] registered Feb 9 18:57:22.201528 kernel: acpiphp: Slot [19] registered Feb 9 18:57:22.201544 kernel: acpiphp: Slot [20] registered Feb 9 18:57:22.201564 kernel: acpiphp: Slot [21] registered Feb 9 18:57:22.201579 kernel: acpiphp: Slot [22] registered Feb 9 18:57:22.201595 kernel: acpiphp: Slot [23] registered Feb 9 18:57:22.201610 kernel: acpiphp: Slot [24] registered Feb 9 18:57:22.201625 kernel: acpiphp: Slot [25] registered Feb 9 18:57:22.201641 kernel: acpiphp: Slot [26] registered Feb 9 18:57:22.201656 kernel: acpiphp: Slot [27] registered Feb 9 18:57:22.201672 kernel: acpiphp: Slot [28] registered Feb 9 18:57:22.201687 kernel: acpiphp: Slot [29] registered Feb 9 18:57:22.201703 kernel: acpiphp: Slot [30] registered Feb 9 18:57:22.201722 kernel: acpiphp: Slot [31] registered Feb 9 18:57:22.201737 kernel: PCI host bridge to bus 0000:00 Feb 9 18:57:22.201927 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 18:57:22.202060 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 18:57:22.202184 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 
18:57:22.202377 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 9 18:57:22.202523 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 18:57:22.202687 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 18:57:22.202982 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 18:57:22.203125 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Feb 9 18:57:22.203267 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 9 18:57:22.203419 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 9 18:57:22.203581 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Feb 9 18:57:22.204019 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Feb 9 18:57:22.204833 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Feb 9 18:57:22.204979 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Feb 9 18:57:22.205203 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Feb 9 18:57:22.205390 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Feb 9 18:57:22.205547 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Feb 9 18:57:22.205676 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Feb 9 18:57:22.205803 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 9 18:57:22.205937 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 18:57:22.206071 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 9 18:57:22.206201 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Feb 9 18:57:22.206343 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 9 18:57:22.206487 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Feb 9 18:57:22.206506 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 18:57:22.206525 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 18:57:22.206539 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 18:57:22.206553 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 18:57:22.206568 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 18:57:22.206582 kernel: iommu: Default domain type: Translated Feb 9 18:57:22.206801 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 18:57:22.207056 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Feb 9 18:57:22.207189 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 18:57:22.207315 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Feb 9 18:57:22.207338 kernel: vgaarb: loaded Feb 9 18:57:22.207354 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 18:57:22.207369 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 18:57:22.207384 kernel: PTP clock support registered Feb 9 18:57:22.207398 kernel: PCI: Using ACPI for IRQ routing Feb 9 18:57:22.207413 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 18:57:22.207428 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 9 18:57:22.207452 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Feb 9 18:57:22.207470 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 9 18:57:22.207485 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Feb 9 18:57:22.207500 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 18:57:22.207516 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 18:57:22.207595 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 18:57:22.207611 kernel: pnp: PnP ACPI init Feb 9 18:57:22.207624 kernel: pnp: PnP ACPI: found 5 devices Feb 9 18:57:22.207638 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 18:57:22.207652 kernel: NET: Registered PF_INET protocol family Feb 9 18:57:22.207670 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 18:57:22.207737 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 9 18:57:22.207754 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 18:57:22.207767 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 18:57:22.207781 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 18:57:22.207793 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 9 18:57:22.207808 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 18:57:22.207823 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 18:57:22.207838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 18:57:22.207855 kernel: NET: Registered PF_XDP protocol family Feb 9 18:57:22.207997 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 18:57:22.208228 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 18:57:22.208353 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 18:57:22.208482 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 9 18:57:22.208618 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 18:57:22.208753 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 18:57:22.208775 kernel: PCI: CLS 0 bytes, default 64 Feb 9 18:57:22.208789 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 18:57:22.208804 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Feb 9 18:57:22.208816 kernel: clocksource: Switched to clocksource tsc Feb 9 18:57:22.208829 kernel: Initialise system trusted keyrings Feb 9 18:57:22.208842 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 18:57:22.208856 kernel: Key type asymmetric registered Feb 9 18:57:22.208868 kernel: Asymmetric key parser 'x509' registered Feb 9 18:57:22.208882 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 18:57:22.208899 kernel: io scheduler mq-deadline registered Feb 9 18:57:22.208913 kernel: io scheduler kyber registered Feb 9 18:57:22.208926 kernel: io scheduler bfq registered Feb 9 18:57:22.208940 kernel: ioatdma: Intel(R) QuickData 
Technology Driver 5.00 Feb 9 18:57:22.208954 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 18:57:22.208967 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 18:57:22.208980 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 18:57:22.208994 kernel: i8042: Warning: Keylock active Feb 9 18:57:22.209008 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 18:57:22.209024 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 18:57:22.209174 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 9 18:57:22.209433 kernel: rtc_cmos 00:00: registered as rtc0 Feb 9 18:57:22.209565 kernel: rtc_cmos 00:00: setting system clock to 2024-02-09T18:57:21 UTC (1707505041) Feb 9 18:57:22.209678 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 9 18:57:22.209694 kernel: intel_pstate: CPU model not supported Feb 9 18:57:22.209707 kernel: NET: Registered PF_INET6 protocol family Feb 9 18:57:22.209721 kernel: Segment Routing with IPv6 Feb 9 18:57:22.209739 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 18:57:22.209752 kernel: NET: Registered PF_PACKET protocol family Feb 9 18:57:22.209765 kernel: Key type dns_resolver registered Feb 9 18:57:22.209779 kernel: IPI shorthand broadcast: enabled Feb 9 18:57:22.209793 kernel: sched_clock: Marking stable (453574319, 377965907)->(1005939593, -174399367) Feb 9 18:57:22.209805 kernel: registered taskstats version 1 Feb 9 18:57:22.209820 kernel: Loading compiled-in X.509 certificates Feb 9 18:57:22.209834 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 18:57:22.209846 kernel: Key type .fscrypt registered Feb 9 18:57:22.209862 kernel: Key type fscrypt-provisioning registered Feb 9 18:57:22.209876 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 18:57:22.209889 kernel: ima: Allocated hash algorithm: sha1 Feb 9 18:57:22.209902 kernel: ima: No architecture policies found Feb 9 18:57:22.209916 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 18:57:22.209929 kernel: Write protecting the kernel read-only data: 28672k Feb 9 18:57:22.209943 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 18:57:22.209957 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 18:57:22.209969 kernel: Run /init as init process Feb 9 18:57:22.209985 kernel: with arguments: Feb 9 18:57:22.209999 kernel: /init Feb 9 18:57:22.210012 kernel: with environment: Feb 9 18:57:22.210024 kernel: HOME=/ Feb 9 18:57:22.210037 kernel: TERM=linux Feb 9 18:57:22.210050 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 18:57:22.210067 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:57:22.210087 systemd[1]: Detected virtualization amazon. Feb 9 18:57:22.210101 systemd[1]: Detected architecture x86-64. Feb 9 18:57:22.210115 systemd[1]: Running in initrd. Feb 9 18:57:22.210128 systemd[1]: No hostname configured, using default hostname. Feb 9 18:57:22.210143 systemd[1]: Hostname set to . Feb 9 18:57:22.210175 systemd[1]: Initializing machine ID from VM UUID. 
Feb 9 18:57:22.210193 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 18:57:22.210207 systemd[1]: Queued start job for default target initrd.target. Feb 9 18:57:22.210221 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:57:22.210235 systemd[1]: Reached target cryptsetup.target. Feb 9 18:57:22.210250 systemd[1]: Reached target paths.target. Feb 9 18:57:22.210263 systemd[1]: Reached target slices.target. Feb 9 18:57:22.210277 systemd[1]: Reached target swap.target. Feb 9 18:57:22.210294 systemd[1]: Reached target timers.target. Feb 9 18:57:22.210312 systemd[1]: Listening on iscsid.socket. Feb 9 18:57:22.210326 systemd[1]: Listening on iscsiuio.socket. Feb 9 18:57:22.210341 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:57:22.210356 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:57:22.210372 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:57:22.210387 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:57:22.210401 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:57:22.210415 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:57:22.210430 systemd[1]: Reached target sockets.target. Feb 9 18:57:22.210457 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:57:22.210471 systemd[1]: Finished network-cleanup.service. Feb 9 18:57:22.210485 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 18:57:22.210500 systemd[1]: Starting systemd-journald.service... Feb 9 18:57:22.210515 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:57:22.210529 systemd[1]: Starting systemd-resolved.service... Feb 9 18:57:22.210543 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 18:57:22.210564 systemd-journald[185]: Journal started Feb 9 18:57:22.210691 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2fea8f888106812460932539291045) is 4.8M, max 38.7M, 33.9M free. Feb 9 18:57:22.225470 systemd[1]: Started systemd-journald.service. Feb 9 18:57:22.218912 systemd-modules-load[186]: Inserted module 'overlay' Feb 9 18:57:22.390945 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 18:57:22.390978 kernel: Bridge firewalling registered Feb 9 18:57:22.390997 kernel: SCSI subsystem initialized Feb 9 18:57:22.391014 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 18:57:22.391031 kernel: device-mapper: uevent: version 1.0.3 Feb 9 18:57:22.391052 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 18:57:22.391071 kernel: audit: type=1130 audit(1707505042.384:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.267092 systemd-modules-load[186]: Inserted module 'br_netfilter' Feb 9 18:57:22.398063 kernel: audit: type=1130 audit(1707505042.392:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:57:22.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.309382 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 9 18:57:22.346275 systemd-resolved[187]: Positive Trust Anchors: Feb 9 18:57:22.405452 kernel: audit: type=1130 audit(1707505042.400:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.346287 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:57:22.346335 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:57:22.350196 systemd-resolved[187]: Defaulting to hostname 'linux'. Feb 9 18:57:22.385501 systemd[1]: Started systemd-resolved.service. Feb 9 18:57:22.399290 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:57:22.421984 kernel: audit: type=1130 audit(1707505042.415:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.415036 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 18:57:22.423614 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:57:22.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.426795 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 18:57:22.432465 kernel: audit: type=1130 audit(1707505042.426:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.435285 systemd[1]: Reached target nss-lookup.target. Feb 9 18:57:22.440466 kernel: audit: type=1130 audit(1707505042.433:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.444742 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 18:57:22.448010 systemd[1]: Starting systemd-sysctl.service... 
Feb 9 18:57:22.452713 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:57:22.475707 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:57:22.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.478775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:57:22.489745 kernel: audit: type=1130 audit(1707505042.478:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.489778 kernel: audit: type=1130 audit(1707505042.484:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.493424 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 18:57:22.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.496688 systemd[1]: Starting dracut-cmdline.service... Feb 9 18:57:22.503334 kernel: audit: type=1130 audit(1707505042.495:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.510264 dracut-cmdline[206]: dracut-dracut-053 Feb 9 18:57:22.513145 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:57:22.588464 kernel: Loading iSCSI transport class v2.0-870. Feb 9 18:57:22.602478 kernel: iscsi: registered transport (tcp) Feb 9 18:57:22.629163 kernel: iscsi: registered transport (qla4xxx) Feb 9 18:57:22.629238 kernel: QLogic iSCSI HBA Driver Feb 9 18:57:22.664217 systemd[1]: Finished dracut-cmdline.service. Feb 9 18:57:22.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:22.667199 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 18:57:22.723501 kernel: raid6: avx512x4 gen() 11458 MB/s Feb 9 18:57:22.740565 kernel: raid6: avx512x4 xor() 7071 MB/s Feb 9 18:57:22.758482 kernel: raid6: avx512x2 gen() 16668 MB/s Feb 9 18:57:22.777487 kernel: raid6: avx512x2 xor() 17789 MB/s Feb 9 18:57:22.794495 kernel: raid6: avx512x1 gen() 15710 MB/s Feb 9 18:57:22.811488 kernel: raid6: avx512x1 xor() 18706 MB/s Feb 9 18:57:22.829492 kernel: raid6: avx2x4 gen() 13494 MB/s Feb 9 18:57:22.846480 kernel: raid6: avx2x4 xor() 6527 MB/s Feb 9 18:57:22.863494 kernel: raid6: avx2x2 gen() 13754 MB/s Feb 9 18:57:22.881567 kernel: raid6: avx2x2 xor() 15751 MB/s Feb 9 18:57:22.899497 kernel: raid6: avx2x1 gen() 9228 MB/s Feb 9 18:57:22.917643 kernel: raid6: avx2x1 xor() 9993 MB/s Feb 9 18:57:22.935494 kernel: raid6: sse2x4 gen() 6558 MB/s Feb 9 18:57:22.952492 kernel: raid6: sse2x4 xor() 3156 MB/s Feb 9 18:57:22.970478 kernel: raid6: sse2x2 gen() 8623 MB/s Feb 9 18:57:22.988482 kernel: raid6: sse2x2 xor() 5179 MB/s Feb 9 18:57:23.009498 kernel: raid6: sse2x1 gen() 6266 MB/s Feb 9 18:57:23.027526 kernel: raid6: sse2x1 xor() 3047 MB/s Feb 9 18:57:23.027604 kernel: raid6: using algorithm avx512x2 gen() 16668 MB/s Feb 9 18:57:23.027623 kernel: raid6: .... xor() 17789 MB/s, rmw enabled Feb 9 18:57:23.028641 kernel: raid6: using avx512x2 recovery algorithm Feb 9 18:57:23.084809 kernel: xor: automatically using best checksumming function avx Feb 9 18:57:23.238578 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 18:57:23.248670 systemd[1]: Finished dracut-pre-udev.service. Feb 9 18:57:23.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:23.250000 audit: BPF prog-id=7 op=LOAD Feb 9 18:57:23.250000 audit: BPF prog-id=8 op=LOAD Feb 9 18:57:23.251962 systemd[1]: Starting systemd-udevd.service... Feb 9 18:57:23.267374 systemd-udevd[383]: Using default interface naming scheme 'v252'. Feb 9 18:57:23.275398 systemd[1]: Started systemd-udevd.service. Feb 9 18:57:23.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:23.279259 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:57:23.294403 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Feb 9 18:57:23.331010 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 18:57:23.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:23.334035 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:57:23.400964 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:57:23.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:23.483729 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 18:57:23.510131 kernel: AVX2 version of gcm_enc/dec engaged. 
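
The raid6 lines above benchmark every gen()/xor() implementation and then settle on the one with the fastest gen() result, avx512x2 at 16668 MB/s here. A toy sketch of that selection step, using throughput figures copied from the log (this is not the kernel's own code):

```python
# Sketch of the selection behind "raid6: using algorithm avx512x2": pick the
# implementation with the highest gen() throughput. Figures (MB/s) are copied
# from the benchmark lines above.
gen_mb_s = {
    "avx512x4": 11458, "avx512x2": 16668, "avx512x1": 15710,
    "avx2x4": 13494, "avx2x2": 13754, "avx2x1": 9228,
    "sse2x4": 6558, "sse2x2": 8623, "sse2x1": 6266,
}

best = max(gen_mb_s, key=gen_mb_s.get)
print(best, gen_mb_s[best])   # -> avx512x2 16668, matching the log
```
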
Feb 9 18:57:23.510192 kernel: AES CTR mode by8 optimization enabled Feb 9 18:57:23.526784 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 9 18:57:23.527144 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 9 18:57:23.536814 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Feb 9 18:57:23.539464 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:62:da:dd:57:97 Feb 9 18:57:23.544006 (udev-worker)[440]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:57:23.795044 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 9 18:57:23.795239 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 9 18:57:23.795253 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 9 18:57:23.795351 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 18:57:23.795363 kernel: GPT:9289727 != 16777215 Feb 9 18:57:23.795374 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:57:23.795384 kernel: GPT:9289727 != 16777215 Feb 9 18:57:23.795397 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:57:23.795408 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:57:23.795421 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (432) Feb 9 18:57:23.727577 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:57:23.800873 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:57:23.816286 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:57:23.824793 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:57:23.827376 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:57:23.829881 systemd[1]: Starting disk-uuid.service... Feb 9 18:57:23.838520 disk-uuid[592]: Primary Header is updated. Feb 9 18:57:23.838520 disk-uuid[592]: Secondary Entries is updated. Feb 9 18:57:23.838520 disk-uuid[592]: Secondary Header is updated. Feb 9 18:57:23.842775 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:57:23.850461 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:57:23.857464 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:57:24.864043 disk-uuid[593]: The operation has completed successfully. Feb 9 18:57:24.865603 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:57:24.997606 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:57:24.997716 systemd[1]: Finished disk-uuid.service. Feb 9 18:57:24.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:24.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.006334 systemd[1]: Starting verity-setup.service... Feb 9 18:57:25.024461 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 18:57:25.107899 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:57:25.111894 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:57:25.118033 systemd[1]: Finished verity-setup.service. Feb 9 18:57:25.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:57:25.271479 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:57:25.272422 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:57:25.272804 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:57:25.273737 systemd[1]: Starting ignition-setup.service... Feb 9 18:57:25.282636 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 18:57:25.303863 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:57:25.303917 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 18:57:25.303930 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 18:57:25.316474 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 18:57:25.328280 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:57:25.381461 systemd[1]: Finished ignition-setup.service. Feb 9 18:57:25.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.384917 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:57:25.401830 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:57:25.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.403000 audit: BPF prog-id=9 op=LOAD Feb 9 18:57:25.404763 systemd[1]: Starting systemd-networkd.service... Feb 9 18:57:25.431554 systemd-networkd[1105]: lo: Link UP Feb 9 18:57:25.431566 systemd-networkd[1105]: lo: Gained carrier Feb 9 18:57:25.432240 systemd-networkd[1105]: Enumeration completed Feb 9 18:57:25.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.432353 systemd[1]: Started systemd-networkd.service. Feb 9 18:57:25.432726 systemd-networkd[1105]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:57:25.435950 systemd[1]: Reached target network.target. Feb 9 18:57:25.439275 systemd[1]: Starting iscsiuio.service... Feb 9 18:57:25.443574 systemd-networkd[1105]: eth0: Link UP Feb 9 18:57:25.443583 systemd-networkd[1105]: eth0: Gained carrier Feb 9 18:57:25.451731 systemd[1]: Started iscsiuio.service. Feb 9 18:57:25.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.455973 systemd[1]: Starting iscsid.service... Feb 9 18:57:25.461966 iscsid[1110]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:57:25.461966 iscsid[1110]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 18:57:25.461966 iscsid[1110]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Feb 9 18:57:25.461966 iscsid[1110]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:57:25.461966 iscsid[1110]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:57:25.461966 iscsid[1110]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:57:25.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.463941 systemd[1]: Started iscsid.service. Feb 9 18:57:25.476664 systemd-networkd[1105]: eth0: DHCPv4 address 172.31.21.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 18:57:25.477353 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:57:25.498479 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:57:25.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.499745 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:57:25.502176 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:57:25.503599 systemd[1]: Reached target remote-fs.target. Feb 9 18:57:25.505683 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:57:25.517336 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:57:25.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.752076 ignition[1101]: Ignition 2.14.0 Feb 9 18:57:25.752091 ignition[1101]: Stage: fetch-offline Feb 9 18:57:25.752233 ignition[1101]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:57:25.752277 ignition[1101]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:57:25.772479 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:57:25.774537 ignition[1101]: Ignition finished successfully Feb 9 18:57:25.776935 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:57:25.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.778185 systemd[1]: Starting ignition-fetch.service... 
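
The GPT warnings logged earlier ("GPT:9289727 != 16777215", "GPT: Use GNU Parted to correct GPT errors") come from the backup header location: the primary header still records the last LBA of the original disk image, while the grown volume ends at LBA 16777215. A worked check of those two numbers, assuming 512-byte logical sectors:

```python
# Worked check of the GPT warning "GPT:9289727 != 16777215" logged earlier,
# assuming 512-byte logical sectors. The backup GPT header is expected at the
# last LBA of the device; the primary header still points at the last LBA of
# the smaller original image.
SECTOR = 512

def last_lba(disk_bytes: int) -> int:
    return disk_bytes // SECTOR - 1

volume_bytes = 8 * 1024**3                 # 16777216 sectors -> an 8 GiB volume
print(last_lba(volume_bytes))               # 16777215, where the backup header belongs

recorded = 9289727                          # alternate-header LBA from the primary header
print((recorded + 1) * SECTOR / 1024**3)    # ~4.43 GiB, the size of the original image
```
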
Feb 9 18:57:25.787577 ignition[1129]: Ignition 2.14.0 Feb 9 18:57:25.787589 ignition[1129]: Stage: fetch Feb 9 18:57:25.787777 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:57:25.787812 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:57:25.796764 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:57:25.798326 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:57:25.832540 ignition[1129]: INFO : PUT result: OK Feb 9 18:57:25.838317 ignition[1129]: DEBUG : parsed url from cmdline: "" Feb 9 18:57:25.838317 ignition[1129]: INFO : no config URL provided Feb 9 18:57:25.838317 ignition[1129]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:57:25.838317 ignition[1129]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 9 18:57:25.843499 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:57:25.843499 ignition[1129]: INFO : PUT result: OK Feb 9 18:57:25.843499 ignition[1129]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 9 18:57:25.848235 ignition[1129]: INFO : GET result: OK Feb 9 18:57:25.849249 ignition[1129]: DEBUG : parsing config with SHA512: 6c924424c08ec251cf043ba0a02e531f88c27ae60fa0435fb72e7ab7f20f31a240ef429936cbf8dbacb68bd74d88495629def656cef4b71ac0ec36eaa13043c3 Feb 9 18:57:25.879427 unknown[1129]: fetched base config from "system" Feb 9 18:57:25.879759 unknown[1129]: fetched base config from "system" Feb 9 18:57:25.880892 ignition[1129]: fetch: fetch complete Feb 9 18:57:25.879770 unknown[1129]: fetched user config from "aws" Feb 9 18:57:25.880901 ignition[1129]: fetch: fetch passed Feb 9 18:57:25.881217 ignition[1129]: Ignition finished successfully Feb 9 18:57:25.890126 systemd[1]: Finished ignition-fetch.service. Feb 9 18:57:25.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.892569 systemd[1]: Starting ignition-kargs.service... Feb 9 18:57:25.909172 ignition[1135]: Ignition 2.14.0 Feb 9 18:57:25.909185 ignition[1135]: Stage: kargs Feb 9 18:57:25.909375 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:57:25.909407 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:57:25.920740 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:57:25.922206 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:57:25.926288 ignition[1135]: INFO : PUT result: OK Feb 9 18:57:25.935208 ignition[1135]: kargs: kargs passed Feb 9 18:57:25.935269 ignition[1135]: Ignition finished successfully Feb 9 18:57:25.939303 systemd[1]: Finished ignition-kargs.service. Feb 9 18:57:25.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:25.942608 systemd[1]: Starting ignition-disks.service... 
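
The Ignition fetch stage above follows the IMDSv2 pattern: PUT http://169.254.169.254/latest/api/token to obtain a session token, then GET http://169.254.169.254/2019-10-01/user-data with it. A minimal standard-library sketch of that flow, not part of the log; the header names are the usual IMDSv2 ones and do not appear in the log itself:

```python
# Minimal sketch of the IMDSv2 flow shown in the Ignition "fetch" stage above:
# PUT /latest/api/token for a session token, then GET user-data with the token.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def user_data(token: str) -> bytes:
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read()

if __name__ == "__main__":
    # Prints the first part of the instance user data (404 if none is attached).
    print(user_data(imds_token())[:200])
```
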
Feb 9 18:57:25.989647 ignition[1141]: Ignition 2.14.0 Feb 9 18:57:25.989662 ignition[1141]: Stage: disks Feb 9 18:57:25.989893 ignition[1141]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:57:25.989927 ignition[1141]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:57:25.998727 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:57:26.000080 ignition[1141]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:57:26.002826 ignition[1141]: INFO : PUT result: OK Feb 9 18:57:26.007023 ignition[1141]: disks: disks passed Feb 9 18:57:26.012273 ignition[1141]: Ignition finished successfully Feb 9 18:57:26.021001 systemd[1]: Finished ignition-disks.service. Feb 9 18:57:26.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:26.023941 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:57:26.026524 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:57:26.029700 systemd[1]: Reached target local-fs.target. Feb 9 18:57:26.031826 systemd[1]: Reached target sysinit.target. Feb 9 18:57:26.033977 systemd[1]: Reached target basic.target. Feb 9 18:57:26.037182 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:57:26.072353 systemd-fsck[1149]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 18:57:26.088422 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:57:26.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:26.091742 systemd[1]: Mounting sysroot.mount... Feb 9 18:57:26.105512 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:57:26.106544 systemd[1]: Mounted sysroot.mount. Feb 9 18:57:26.107654 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:57:26.124878 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:57:26.126892 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 18:57:26.126952 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:57:26.126986 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:57:26.139040 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:57:26.140564 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:57:26.148085 initrd-setup-root[1170]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:57:26.155404 initrd-setup-root[1178]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:57:26.161371 initrd-setup-root[1186]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:57:26.166534 initrd-setup-root[1194]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:57:26.170004 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 9 18:57:26.197542 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1197) Feb 9 18:57:26.203538 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:57:26.203610 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 18:57:26.203629 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 18:57:26.211456 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 18:57:26.227605 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:57:26.306282 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:57:26.314029 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:57:26.314090 kernel: audit: type=1130 audit(1707505046.308:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:26.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:26.309873 systemd[1]: Starting ignition-mount.service... Feb 9 18:57:26.320177 systemd[1]: Starting sysroot-boot.service... Feb 9 18:57:26.330282 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 18:57:26.330415 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 18:57:26.355600 ignition[1231]: INFO : Ignition 2.14.0 Feb 9 18:57:26.357546 ignition[1231]: INFO : Stage: mount Feb 9 18:57:26.357546 ignition[1231]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:57:26.357546 ignition[1231]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:57:26.374229 ignition[1231]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:57:26.375820 ignition[1231]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:57:26.378195 ignition[1231]: INFO : PUT result: OK Feb 9 18:57:26.382266 ignition[1231]: INFO : mount: mount passed Feb 9 18:57:26.383268 ignition[1231]: INFO : Ignition finished successfully Feb 9 18:57:26.383767 systemd[1]: Finished ignition-mount.service. Feb 9 18:57:26.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:26.391015 systemd[1]: Starting ignition-files.service... Feb 9 18:57:26.397051 kernel: audit: type=1130 audit(1707505046.386:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:26.399679 systemd[1]: Finished sysroot-boot.service. Feb 9 18:57:26.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:26.406463 kernel: audit: type=1130 audit(1707505046.401:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:26.408968 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
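The audit records interleaved above use the form audit(1707505046.308:34): seconds and milliseconds since the Unix epoch, followed by a per-boot record serial. Decoding the timestamp shows it lines up with the surrounding journal times; a small sketch, with the values copied from the type=1130 record above.

package main

import (
    "fmt"
    "time"
)

func main() {
    // audit(1707505046.308:34) -> epoch seconds, milliseconds, record serial.
    sec, ms, serial := int64(1707505046), int64(308), 34
    t := time.Unix(sec, ms*int64(time.Millisecond)).UTC()
    fmt.Printf("audit record #%d logged at %s\n", serial, t.Format("2006-01-02 15:04:05.000 MST"))
    // Prints 2024-02-09 18:57:26.308 UTC, matching the kernel lines around it.
}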
Feb 9 18:57:26.422467 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1241) Feb 9 18:57:26.425928 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:57:26.425972 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 18:57:26.425984 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 18:57:26.432461 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 18:57:26.436272 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:57:26.448551 ignition[1260]: INFO : Ignition 2.14.0 Feb 9 18:57:26.448551 ignition[1260]: INFO : Stage: files Feb 9 18:57:26.450970 ignition[1260]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:57:26.450970 ignition[1260]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:57:26.461837 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:57:26.463325 ignition[1260]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:57:26.465452 ignition[1260]: INFO : PUT result: OK Feb 9 18:57:26.469402 ignition[1260]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:57:26.474261 ignition[1260]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:57:26.474261 ignition[1260]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:57:26.501896 ignition[1260]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:57:26.503754 ignition[1260]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:57:26.506364 unknown[1260]: wrote ssh authorized keys file for user: core Feb 9 18:57:26.508032 ignition[1260]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:57:26.510481 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:57:26.512830 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:57:26.515312 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 18:57:26.518124 ignition[1260]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 18:57:26.958616 systemd-networkd[1105]: eth0: Gained IPv6LL Feb 9 18:57:26.983775 ignition[1260]: INFO : GET result: OK Feb 9 18:57:27.267548 ignition[1260]: DEBUG : file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 18:57:27.270718 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 18:57:27.270718 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 18:57:27.270718 ignition[1260]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 18:57:27.446467 ignition[1260]: INFO : GET result: OK Feb 9 18:57:27.579485 
ignition[1260]: DEBUG : file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 18:57:27.583180 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 18:57:27.583180 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 18:57:27.583180 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:57:27.600538 ignition[1260]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3687130044" Feb 9 18:57:27.605343 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1262) Feb 9 18:57:27.605370 ignition[1260]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3687130044": device or resource busy Feb 9 18:57:27.605370 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3687130044", trying btrfs: device or resource busy Feb 9 18:57:27.605370 ignition[1260]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3687130044" Feb 9 18:57:27.605370 ignition[1260]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3687130044" Feb 9 18:57:27.618870 ignition[1260]: INFO : op(3): [started] unmounting "/mnt/oem3687130044" Feb 9 18:57:27.621553 systemd[1]: mnt-oem3687130044.mount: Deactivated successfully. Feb 9 18:57:27.624433 ignition[1260]: INFO : op(3): [finished] unmounting "/mnt/oem3687130044" Feb 9 18:57:27.626085 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 18:57:27.629350 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:57:27.631645 ignition[1260]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 18:57:27.767494 ignition[1260]: INFO : GET result: OK Feb 9 18:57:28.065574 ignition[1260]: DEBUG : file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 18:57:28.068497 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:57:28.068497 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:57:28.068497 ignition[1260]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 18:57:28.132913 ignition[1260]: INFO : GET result: OK Feb 9 18:57:28.896676 ignition[1260]: DEBUG : file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 18:57:28.903941 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:57:28.903941 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:57:28.903941 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:57:28.903941 ignition[1260]: INFO : 
files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:57:28.903941 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:57:28.922212 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:57:28.922212 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:57:28.922212 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 18:57:28.922212 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:57:28.945225 ignition[1260]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1580353036" Feb 9 18:57:28.949577 ignition[1260]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1580353036": device or resource busy Feb 9 18:57:28.949577 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1580353036", trying btrfs: device or resource busy Feb 9 18:57:28.949577 ignition[1260]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1580353036" Feb 9 18:57:28.949577 ignition[1260]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1580353036" Feb 9 18:57:28.949577 ignition[1260]: INFO : op(6): [started] unmounting "/mnt/oem1580353036" Feb 9 18:57:28.949577 ignition[1260]: INFO : op(6): [finished] unmounting "/mnt/oem1580353036" Feb 9 18:57:28.949577 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 18:57:28.949577 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 18:57:28.949577 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:57:28.953852 systemd[1]: mnt-oem1580353036.mount: Deactivated successfully. 
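The files stage fetches several binaries over HTTPS (the CNI plugins archive, crictl, kubeadm, kubelet) and accepts each one only after the digest check logged as "file matches expected sum of: …". A minimal sketch of that download-then-verify step in Go; the URL and expected digest are copied from the kubeadm entry in the log, while the destination path is just an example.

package main

import (
    "crypto/sha512"
    "encoding/hex"
    "fmt"
    "io"
    "net/http"
    "os"
)

func main() {
    url := "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm"
    expected := "1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660"

    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    out, err := os.Create("/tmp/kubeadm") // illustrative destination
    if err != nil {
        panic(err)
    }
    defer out.Close()

    // Hash the stream while writing it to disk, then compare against the expected sum.
    h := sha512.New()
    if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
        panic(err)
    }
    if got := hex.EncodeToString(h.Sum(nil)); got != expected {
        panic("checksum mismatch: got " + got)
    }
    fmt.Println("checksum verified")
}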
Feb 9 18:57:28.977186 ignition[1260]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3225139929" Feb 9 18:57:28.979368 ignition[1260]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3225139929": device or resource busy Feb 9 18:57:28.979368 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3225139929", trying btrfs: device or resource busy Feb 9 18:57:28.979368 ignition[1260]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3225139929" Feb 9 18:57:28.985608 ignition[1260]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3225139929" Feb 9 18:57:28.985608 ignition[1260]: INFO : op(9): [started] unmounting "/mnt/oem3225139929" Feb 9 18:57:28.985608 ignition[1260]: INFO : op(9): [finished] unmounting "/mnt/oem3225139929" Feb 9 18:57:28.985608 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 18:57:28.985608 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:57:28.985608 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:57:29.004657 ignition[1260]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2681655380" Feb 9 18:57:29.006504 ignition[1260]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2681655380": device or resource busy Feb 9 18:57:29.006504 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2681655380", trying btrfs: device or resource busy Feb 9 18:57:29.006504 ignition[1260]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2681655380" Feb 9 18:57:29.013335 ignition[1260]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2681655380" Feb 9 18:57:29.013335 ignition[1260]: INFO : op(c): [started] unmounting "/mnt/oem2681655380" Feb 9 18:57:29.013335 ignition[1260]: INFO : op(c): [finished] unmounting "/mnt/oem2681655380" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(10): [started] processing unit "amazon-ssm-agent.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(10): op(11): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(10): op(11): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(10): [finished] processing unit "amazon-ssm-agent.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(12): [started] processing unit "nvidia.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(12): [finished] processing unit "nvidia.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(13): op(14): [started] writing unit 
"prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(15): [started] processing unit "containerd.service" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:57:29.013335 ignition[1260]: INFO : files: op(15): [finished] processing unit "containerd.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(17): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(17): op(18): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(17): op(18): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(17): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(1d): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 18:57:29.060177 ignition[1260]: INFO : files: op(1d): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 18:57:29.093902 ignition[1260]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:57:29.097730 ignition[1260]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:57:29.100028 ignition[1260]: INFO : files: files passed Feb 9 18:57:29.100028 ignition[1260]: INFO : Ignition finished successfully Feb 9 18:57:29.103635 systemd[1]: Finished ignition-files.service. 
Feb 9 18:57:29.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.111419 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:57:29.119755 kernel: audit: type=1130 audit(1707505049.103:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.115896 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:57:29.119021 systemd[1]: Starting ignition-quench.service... Feb 9 18:57:29.131143 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:57:29.131278 systemd[1]: Finished ignition-quench.service. Feb 9 18:57:29.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.142398 initrd-setup-root-after-ignition[1285]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:57:29.145959 kernel: audit: type=1130 audit(1707505049.133:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.145988 kernel: audit: type=1131 audit(1707505049.133:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.143822 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:57:29.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.151598 systemd[1]: Reached target ignition-complete.target. Feb 9 18:57:29.160683 kernel: audit: type=1130 audit(1707505049.148:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.162131 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:57:29.193853 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:57:29.193988 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:57:29.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.198175 systemd[1]: Reached target initrd-fs.target. Feb 9 18:57:29.208978 kernel: audit: type=1130 audit(1707505049.197:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:57:29.209006 kernel: audit: type=1131 audit(1707505049.197:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.211191 systemd[1]: Reached target initrd.target. Feb 9 18:57:29.211422 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:57:29.212761 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:57:29.230659 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:57:29.232016 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:57:29.242343 kernel: audit: type=1130 audit(1707505049.230:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.249727 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:57:29.249952 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:57:29.253483 systemd[1]: Stopped target timers.target. Feb 9 18:57:29.257282 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:57:29.260028 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:57:29.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.262018 systemd[1]: Stopped target initrd.target. Feb 9 18:57:29.263822 systemd[1]: Stopped target basic.target. Feb 9 18:57:29.265703 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:57:29.267765 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:57:29.270058 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:57:29.272341 systemd[1]: Stopped target remote-fs.target. Feb 9 18:57:29.274623 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:57:29.274803 systemd[1]: Stopped target sysinit.target. Feb 9 18:57:29.285746 systemd[1]: Stopped target local-fs.target. Feb 9 18:57:29.294973 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:57:29.306433 systemd[1]: Stopped target swap.target. Feb 9 18:57:29.310997 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:57:29.311207 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:57:29.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.315371 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:57:29.317452 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:57:29.319852 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:57:29.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.322920 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
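In the files stage that finished above, every write to the OEM partition followed the same pattern: Ignition first tried to mount /dev/disk/by-label/OEM as ext4, hit "device or resource busy", and then retried as btrfs, which succeeded. A sketch of that try-each-filesystem loop, using golang.org/x/sys/unix and a hypothetical mount point; it mirrors the fallback behaviour visible in the log rather than Ignition's actual implementation.

package main

import (
    "fmt"
    "os"

    "golang.org/x/sys/unix"
)

// mountWithFallback tries each filesystem type in order until one succeeds,
// mirroring the ext4-then-btrfs retries in the log.
func mountWithFallback(device, target string, fstypes []string) error {
    var lastErr error
    for _, fstype := range fstypes {
        if err := unix.Mount(device, target, fstype, 0, ""); err != nil {
            lastErr = fmt.Errorf("mount %s as %s: %w", device, fstype, err)
            continue
        }
        return nil
    }
    return lastErr
}

func main() {
    target := "/mnt/oem-example" // hypothetical mount point
    if err := os.MkdirAll(target, 0o755); err != nil {
        panic(err)
    }
    if err := mountWithFallback("/dev/disk/by-label/OEM", target, []string{"ext4", "btrfs"}); err != nil {
        panic(err)
    }
    defer unix.Unmount(target, 0)
    fmt.Println("mounted", target)
}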
Feb 9 18:57:29.323043 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:57:29.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.338163 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:57:29.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.338361 systemd[1]: Stopped ignition-files.service. Feb 9 18:57:29.349206 systemd[1]: Stopping ignition-mount.service... Feb 9 18:57:29.363083 iscsid[1110]: iscsid shutting down. Feb 9 18:57:29.364814 systemd[1]: Stopping iscsid.service... Feb 9 18:57:29.367027 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:57:29.368814 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:57:29.371096 ignition[1298]: INFO : Ignition 2.14.0 Feb 9 18:57:29.371096 ignition[1298]: INFO : Stage: umount Feb 9 18:57:29.371096 ignition[1298]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:57:29.371096 ignition[1298]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:57:29.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.402676 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:57:29.410857 ignition[1298]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:57:29.410857 ignition[1298]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:57:29.410857 ignition[1298]: INFO : PUT result: OK Feb 9 18:57:29.410857 ignition[1298]: INFO : umount: umount passed Feb 9 18:57:29.410857 ignition[1298]: INFO : Ignition finished successfully Feb 9 18:57:29.419794 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:57:29.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.420057 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:57:29.421828 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:57:29.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.422298 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:57:29.432029 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:57:29.432362 systemd[1]: Stopped iscsid.service. Feb 9 18:57:29.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.437449 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:57:29.437678 systemd[1]: Stopped ignition-mount.service. 
Feb 9 18:57:29.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.442696 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:57:29.442908 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:57:29.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.448375 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:57:29.448550 systemd[1]: Stopped ignition-disks.service. Feb 9 18:57:29.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.458999 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:57:29.459181 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:57:29.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.463683 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 18:57:29.463858 systemd[1]: Stopped ignition-fetch.service. Feb 9 18:57:29.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.467409 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:57:29.467611 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:57:29.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.471996 systemd[1]: Stopped target paths.target. Feb 9 18:57:29.474176 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:57:29.477516 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:57:29.479989 systemd[1]: Stopped target slices.target. Feb 9 18:57:29.483290 systemd[1]: Stopped target sockets.target. Feb 9 18:57:29.485403 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:57:29.485514 systemd[1]: Closed iscsid.socket. Feb 9 18:57:29.489018 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:57:29.489314 systemd[1]: Stopped ignition-setup.service. Feb 9 18:57:29.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.491299 systemd[1]: Stopping iscsiuio.service... Feb 9 18:57:29.497710 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:57:29.498862 systemd[1]: Stopped iscsiuio.service. Feb 9 18:57:29.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:57:29.500744 systemd[1]: Stopped target network.target. Feb 9 18:57:29.503152 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:57:29.503211 systemd[1]: Closed iscsiuio.socket. Feb 9 18:57:29.505716 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:57:29.507709 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:57:29.508482 systemd-networkd[1105]: eth0: DHCPv6 lease lost Feb 9 18:57:29.509975 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:57:29.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.514000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:57:29.511186 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:57:29.514526 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:57:29.514571 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:57:29.519779 systemd[1]: Stopping network-cleanup.service... Feb 9 18:57:29.521987 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:57:29.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.522065 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:57:29.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.528360 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:57:29.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.528428 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:57:29.530396 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:57:29.530480 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:57:29.532975 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:57:29.541568 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:57:29.541704 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:57:29.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.546538 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:57:29.546707 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:57:29.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.550729 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:57:29.550846 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:57:29.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.554000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:57:29.555543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 9 18:57:29.557088 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:57:29.559157 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:57:29.559225 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:57:29.561265 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:57:29.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.561331 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:57:29.563084 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:57:29.563137 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:57:29.564184 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:57:29.564260 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:57:29.570690 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:57:29.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.570762 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:57:29.573530 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:57:29.579617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:57:29.581721 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:57:29.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.585372 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:57:29.586530 systemd[1]: Stopped network-cleanup.service. Feb 9 18:57:29.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.588567 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:57:29.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.588660 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:57:29.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:29.592808 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:57:29.596005 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:57:29.605003 systemd[1]: Switching root. 
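"Switching root." is the point where the initrd hands control to the real root filesystem assembled under /sysroot. Below is a heavily simplified sketch of the underlying mount-move-and-chroot step; the real systemd switch-root additionally carries the API filesystems (/proc, /sys, /dev, /run) across and re-executes itself as PID 1, none of which is shown here.

package main

import "golang.org/x/sys/unix"

func main() {
    // Move the prepared root over / and re-root the process into it.
    newRoot := "/sysroot"
    if err := unix.Chdir(newRoot); err != nil {
        panic(err)
    }
    if err := unix.Mount(newRoot, "/", "", unix.MS_MOVE, ""); err != nil {
        panic(err)
    }
    if err := unix.Chroot("."); err != nil {
        panic(err)
    }
    if err := unix.Chdir("/"); err != nil {
        panic(err)
    }
}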
Feb 9 18:57:29.606000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:57:29.606000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:57:29.611000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:57:29.611000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:57:29.611000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:57:29.629537 systemd-journald[185]: Journal stopped Feb 9 18:57:33.376420 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Feb 9 18:57:33.385721 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:57:33.385754 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:57:33.385784 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:57:33.385802 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:57:33.385819 kernel: SELinux: policy capability open_perms=1 Feb 9 18:57:33.385838 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:57:33.385856 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:57:33.385880 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:57:33.385897 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:57:33.385914 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:57:33.385939 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:57:33.385962 systemd[1]: Successfully loaded SELinux policy in 50.390ms. Feb 9 18:57:33.385988 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.020ms. Feb 9 18:57:33.386008 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:57:33.386028 systemd[1]: Detected virtualization amazon. Feb 9 18:57:33.386046 systemd[1]: Detected architecture x86-64. Feb 9 18:57:33.386065 systemd[1]: Detected first boot. Feb 9 18:57:33.386083 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:57:33.386103 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:57:33.386123 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:57:33.386146 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:57:33.386173 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:57:33.386196 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:57:33.386216 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:57:33.386234 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:57:33.386252 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:57:33.386274 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 18:57:33.386292 systemd[1]: Created slice system-getty.slice. Feb 9 18:57:33.386311 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:57:33.386331 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:57:33.386350 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
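"Initializing machine ID from VM UUID" refers to systemd seeding the machine ID from the hypervisor-provided UUID on first boot; the journal directory name above (ec2fea8f8881…) appears to reflect exactly that ID. A sketch that reads the DMI product UUID the same way a first-boot check might, assuming the usual sysfs path (readable only by root) and rendering it in machine-id style.

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    // The DMI product UUID exposed by the hypervisor; root-only on most systems.
    raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
    if err != nil {
        panic(err)
    }
    uuid := strings.TrimSpace(string(raw))
    // A machine-id-style rendering: lower-case hex without dashes.
    fmt.Println(strings.ToLower(strings.ReplaceAll(uuid, "-", "")))
}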
Feb 9 18:57:33.386369 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:57:33.386387 systemd[1]: Created slice user.slice. Feb 9 18:57:33.386406 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:57:33.386425 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:57:33.386461 systemd[1]: Set up automount boot.automount. Feb 9 18:57:33.386481 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:57:33.386499 systemd[1]: Reached target integritysetup.target. Feb 9 18:57:33.386516 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:57:33.386534 systemd[1]: Reached target remote-fs.target. Feb 9 18:57:33.386554 systemd[1]: Reached target slices.target. Feb 9 18:57:33.386574 systemd[1]: Reached target swap.target. Feb 9 18:57:33.386595 systemd[1]: Reached target torcx.target. Feb 9 18:57:33.386617 systemd[1]: Reached target veritysetup.target. Feb 9 18:57:33.386634 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:57:33.386651 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:57:33.386668 kernel: kauditd_printk_skb: 51 callbacks suppressed Feb 9 18:57:33.386687 kernel: audit: type=1400 audit(1707505053.133:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:57:33.386708 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:57:33.386728 kernel: audit: type=1335 audit(1707505053.133:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 18:57:33.386748 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:57:33.386771 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:57:33.386792 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:57:33.386813 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:57:33.386836 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:57:33.386858 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:57:33.386880 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:57:33.386903 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:57:33.386926 systemd[1]: Mounting media.mount... Feb 9 18:57:33.386951 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 18:57:33.386976 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:57:33.386998 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:57:33.387020 systemd[1]: Mounting tmp.mount... Feb 9 18:57:33.387043 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:57:33.387066 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:57:33.389618 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:57:33.389657 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:57:33.389680 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:57:33.389699 systemd[1]: Starting modprobe@drm.service... Feb 9 18:57:33.389717 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:57:33.389737 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:57:33.389755 systemd[1]: Starting modprobe@loop.service... Feb 9 18:57:33.389777 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 9 18:57:33.389805 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 18:57:33.389824 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 18:57:33.389842 systemd[1]: Starting systemd-journald.service... Feb 9 18:57:33.389861 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:57:33.389880 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:57:33.389900 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:57:33.389918 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:57:33.389940 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 18:57:33.389959 kernel: loop: module loaded Feb 9 18:57:33.389981 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:57:33.390001 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:57:33.390020 systemd[1]: Mounted media.mount. Feb 9 18:57:33.390038 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:57:33.390057 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:57:33.390075 systemd[1]: Mounted tmp.mount. Feb 9 18:57:33.390093 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:57:33.390112 kernel: audit: type=1130 audit(1707505053.336:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.390130 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:57:33.390193 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:57:33.390214 kernel: audit: type=1130 audit(1707505053.349:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.390233 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:57:33.390252 kernel: audit: type=1131 audit(1707505053.349:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.390270 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:57:33.390288 kernel: audit: type=1130 audit(1707505053.364:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.390306 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:57:33.390324 kernel: audit: type=1131 audit(1707505053.364:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.390345 systemd[1]: Finished modprobe@drm.service. Feb 9 18:57:33.390365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:57:33.390384 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 18:57:33.390403 kernel: audit: type=1305 audit(1707505053.374:95): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:57:33.390427 systemd-journald[1444]: Journal started Feb 9 18:57:33.390516 systemd-journald[1444]: Runtime Journal (/run/log/journal/ec2fea8f888106812460932539291045) is 4.8M, max 38.7M, 33.9M free. Feb 9 18:57:33.133000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 18:57:33.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.374000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:57:33.400252 kernel: audit: type=1300 audit(1707505053.374:95): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffedf45bbd0 a2=4000 a3=7ffedf45bc6c items=0 ppid=1 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:57:33.374000 audit[1444]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffedf45bbd0 a2=4000 a3=7ffedf45bc6c items=0 ppid=1 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:57:33.411642 systemd[1]: Started systemd-journald.service. Feb 9 18:57:33.406745 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:57:33.407001 systemd[1]: Finished modprobe@loop.service. Feb 9 18:57:33.410010 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:57:33.412104 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:57:33.374000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:57:33.416507 kernel: audit: type=1327 audit(1707505053.374:95): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:57:33.413939 systemd[1]: Reached target network-pre.target. Feb 9 18:57:33.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:57:33.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.424096 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:57:33.427162 kernel: fuse: init (API version 7.34) Feb 9 18:57:33.430528 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:57:33.436201 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:57:33.440235 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:57:33.442608 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:57:33.444378 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:57:33.446627 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:57:33.451898 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:57:33.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.461046 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:57:33.464145 systemd-journald[1444]: Time spent on flushing to /var/log/journal/ec2fea8f888106812460932539291045 is 105.021ms for 1132 entries. 
Feb 9 18:57:33.464145 systemd-journald[1444]: System Journal (/var/log/journal/ec2fea8f888106812460932539291045) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:57:33.601902 systemd-journald[1444]: Received client request to flush runtime journal. Feb 9 18:57:33.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.463563 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:57:33.467758 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:57:33.471926 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:57:33.475317 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:57:33.480494 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:57:33.513736 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:57:33.515263 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:57:33.534681 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:57:33.603334 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:57:33.623980 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:57:33.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.625956 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:57:33.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.629207 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:57:33.632210 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:57:33.655216 udevadm[1499]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 18:57:33.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:33.694426 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:57:33.697666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:57:33.755290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:57:33.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:57:34.317818 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:57:34.321091 systemd[1]: Starting systemd-udevd.service... Feb 9 18:57:34.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:34.345435 systemd-udevd[1505]: Using default interface naming scheme 'v252'. Feb 9 18:57:34.403884 systemd[1]: Started systemd-udevd.service. Feb 9 18:57:34.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:34.407449 systemd[1]: Starting systemd-networkd.service... Feb 9 18:57:34.423294 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:57:34.495150 systemd[1]: Found device dev-ttyS0.device. Feb 9 18:57:34.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:34.507840 systemd[1]: Started systemd-userdbd.service. Feb 9 18:57:34.519548 (udev-worker)[1517]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:57:34.591467 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 18:57:34.603472 kernel: ACPI: button: Power Button [PWRF] Feb 9 18:57:34.612460 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 9 18:57:34.627554 kernel: ACPI: button: Sleep Button [SLPF] Feb 9 18:57:34.658875 systemd-networkd[1515]: lo: Link UP Feb 9 18:57:34.658887 systemd-networkd[1515]: lo: Gained carrier Feb 9 18:57:34.660278 systemd-networkd[1515]: Enumeration completed Feb 9 18:57:34.660422 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:57:34.660469 systemd[1]: Started systemd-networkd.service. Feb 9 18:57:34.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:34.664291 systemd[1]: Starting systemd-networkd-wait-online.service... 
Feb 9 18:57:34.667857 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:57:34.668048 systemd-networkd[1515]: eth0: Link UP Feb 9 18:57:34.668220 systemd-networkd[1515]: eth0: Gained carrier Feb 9 18:57:34.679116 systemd-networkd[1515]: eth0: DHCPv4 address 172.31.21.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 18:57:34.633000 audit[1513]: AVC avc: denied { confidentiality } for pid=1513 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 18:57:34.633000 audit[1513]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55988fe913d0 a1=32194 a2=7fd304019bc5 a3=5 items=108 ppid=1505 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:57:34.633000 audit: CWD cwd="/" Feb 9 18:57:34.633000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=1 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=2 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=3 name=(null) inode=14726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=4 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=5 name=(null) inode=14727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=6 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=7 name=(null) inode=14728 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.705458 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1507) Feb 9 18:57:34.633000 audit: PATH item=8 name=(null) inode=14728 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=9 name=(null) inode=14729 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=10 name=(null) inode=14728 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=11 name=(null) 
inode=14730 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=12 name=(null) inode=14728 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=13 name=(null) inode=14731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=14 name=(null) inode=14728 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=15 name=(null) inode=14732 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=16 name=(null) inode=14728 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=17 name=(null) inode=14733 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=18 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=19 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=20 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=21 name=(null) inode=14735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=22 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=23 name=(null) inode=14736 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=24 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=25 name=(null) inode=14737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=26 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=27 name=(null) inode=14738 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=28 name=(null) inode=14734 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=29 name=(null) inode=14739 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=30 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=31 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=32 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=33 name=(null) inode=14741 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=34 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=35 name=(null) inode=14742 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=36 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=37 name=(null) inode=14743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=38 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=39 name=(null) inode=14744 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=40 name=(null) inode=14740 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=41 name=(null) inode=14745 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=42 name=(null) inode=14725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=43 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=44 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=45 name=(null) inode=14747 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=46 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=47 name=(null) inode=14748 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=48 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=49 name=(null) inode=14749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=50 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=51 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=52 name=(null) inode=14746 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=53 name=(null) inode=14751 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=55 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=56 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=57 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=58 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=59 name=(null) inode=14754 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=60 name=(null) inode=14752 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=61 name=(null) inode=14755 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=62 name=(null) inode=14755 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=63 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=64 name=(null) inode=14755 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=65 name=(null) inode=14757 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=66 name=(null) inode=14755 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=67 name=(null) inode=14758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=68 name=(null) inode=14755 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=69 name=(null) inode=14759 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=70 name=(null) inode=14755 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=71 name=(null) inode=14760 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=72 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=73 name=(null) inode=14761 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=74 name=(null) inode=14761 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=75 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=76 name=(null) inode=14761 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=77 name=(null) inode=14763 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=78 name=(null) inode=14761 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=79 name=(null) inode=14764 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=80 name=(null) inode=14761 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=81 name=(null) inode=14765 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=82 name=(null) inode=14761 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=83 name=(null) inode=14766 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=84 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=85 name=(null) inode=14767 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=86 name=(null) inode=14767 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=87 name=(null) inode=14768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=88 name=(null) inode=14767 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=89 name=(null) inode=14769 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=90 name=(null) inode=14767 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=91 name=(null) inode=14770 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=92 name=(null) inode=14767 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH 
item=93 name=(null) inode=14771 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=94 name=(null) inode=14767 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=95 name=(null) inode=14772 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=96 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=97 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=98 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=99 name=(null) inode=14774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=100 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=101 name=(null) inode=14775 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=102 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=103 name=(null) inode=14776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=104 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=105 name=(null) inode=14777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=106 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PATH item=107 name=(null) inode=14778 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:57:34.633000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 18:57:34.722527 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Feb 9 18:57:34.775751 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 9 18:57:34.776251 kernel: mousedev: PS/2 mouse 
device common for all mice Feb 9 18:57:34.875226 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 18:57:34.997258 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:57:34.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.001370 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:57:35.028987 lvm[1620]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:57:35.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.062619 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:57:35.064341 systemd[1]: Reached target cryptsetup.target. Feb 9 18:57:35.067540 systemd[1]: Starting lvm2-activation.service... Feb 9 18:57:35.076489 lvm[1622]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:57:35.099904 systemd[1]: Finished lvm2-activation.service. Feb 9 18:57:35.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.101551 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:57:35.102712 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:57:35.102746 systemd[1]: Reached target local-fs.target. Feb 9 18:57:35.104034 systemd[1]: Reached target machines.target. Feb 9 18:57:35.107498 systemd[1]: Starting ldconfig.service... Feb 9 18:57:35.109543 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:57:35.109606 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:57:35.111078 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:57:35.114379 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:57:35.119664 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:57:35.124267 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:57:35.124370 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:57:35.127968 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:57:35.140223 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1625 (bootctl) Feb 9 18:57:35.142169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:57:35.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.163477 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:57:35.165803 systemd-tmpfiles[1628]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Feb 9 18:57:35.172122 systemd-tmpfiles[1628]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:57:35.175264 systemd-tmpfiles[1628]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:57:35.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.196267 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:57:35.197218 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:57:35.314036 systemd-fsck[1634]: fsck.fat 4.2 (2021-01-31) Feb 9 18:57:35.314036 systemd-fsck[1634]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters Feb 9 18:57:35.316524 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:57:35.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.320373 systemd[1]: Mounting boot.mount... Feb 9 18:57:35.346068 systemd[1]: Mounted boot.mount. Feb 9 18:57:35.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.390112 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:57:35.464083 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:57:35.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.466953 systemd[1]: Starting audit-rules.service... Feb 9 18:57:35.470124 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:57:35.473140 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:57:35.477192 systemd[1]: Starting systemd-resolved.service... Feb 9 18:57:35.488392 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:57:35.498313 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:57:35.502300 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:57:35.505000 audit[1663]: SYSTEM_BOOT pid=1663 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.511113 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:57:35.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:57:35.515719 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 18:57:35.596000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:57:35.596000 audit[1675]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc014660d0 a2=420 a3=0 items=0 ppid=1653 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:57:35.596000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:57:35.601031 augenrules[1675]: No rules Feb 9 18:57:35.597754 systemd[1]: Finished audit-rules.service. Feb 9 18:57:35.599216 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:57:35.651619 systemd-resolved[1656]: Positive Trust Anchors: Feb 9 18:57:35.651637 systemd-resolved[1656]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:57:35.651682 systemd-resolved[1656]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:57:35.667623 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:57:35.668873 systemd[1]: Reached target time-set.target. Feb 9 18:57:35.684832 systemd-resolved[1656]: Defaulting to hostname 'linux'. Feb 9 18:57:35.687362 systemd[1]: Started systemd-resolved.service. Feb 9 18:57:35.689148 systemd[1]: Reached target network.target. Feb 9 18:57:35.690429 systemd[1]: Reached target nss-lookup.target. Feb 9 18:57:35.827978 ldconfig[1624]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:57:35.840597 systemd[1]: Finished ldconfig.service. Feb 9 18:57:35.845375 systemd[1]: Starting systemd-update-done.service... Feb 9 18:57:35.857802 systemd[1]: Finished systemd-update-done.service. Feb 9 18:57:35.859299 systemd[1]: Reached target sysinit.target. Feb 9 18:57:35.861740 systemd[1]: Started motdgen.path. Feb 9 18:57:35.862988 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:57:35.865007 systemd[1]: Started logrotate.timer. Feb 9 18:57:35.866199 systemd[1]: Started mdadm.timer. Feb 9 18:57:35.867058 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:57:35.868250 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:57:35.868282 systemd[1]: Reached target paths.target. Feb 9 18:57:35.873375 systemd[1]: Reached target timers.target. Feb 9 18:57:35.875352 systemd[1]: Listening on dbus.socket. Feb 9 18:57:35.882972 systemd[1]: Starting docker.socket... Feb 9 18:57:35.890050 systemd[1]: Listening on sshd.socket. Feb 9 18:57:35.891350 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:57:35.893837 systemd[1]: Listening on docker.socket. Feb 9 18:57:35.895007 systemd[1]: Reached target sockets.target. Feb 9 18:57:35.896419 systemd[1]: Reached target basic.target. 
Feb 9 18:57:35.897609 systemd[1]: System is tainted: cgroupsv1 Feb 9 18:57:35.897665 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:57:35.897698 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:57:35.900259 systemd[1]: Starting containerd.service... Feb 9 18:57:35.903280 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 18:57:35.906217 systemd-timesyncd[1657]: Contacted time server 5.161.184.148:123 (0.flatcar.pool.ntp.org). Feb 9 18:57:35.906278 systemd-timesyncd[1657]: Initial clock synchronization to Fri 2024-02-09 18:57:35.723196 UTC. Feb 9 18:57:35.913391 systemd[1]: Starting dbus.service... Feb 9 18:57:35.917324 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:57:35.920202 systemd[1]: Starting extend-filesystems.service... Feb 9 18:57:35.922339 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:57:35.926424 systemd[1]: Starting motdgen.service... Feb 9 18:57:36.037422 jq[1693]: false Feb 9 18:57:35.929237 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:57:35.933920 systemd[1]: Starting prepare-critools.service... Feb 9 18:57:35.937370 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:57:35.941083 systemd[1]: Starting sshd-keygen.service... Feb 9 18:57:35.948971 systemd[1]: Starting systemd-logind.service... Feb 9 18:57:35.954806 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:57:35.954890 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:57:35.958560 systemd[1]: Starting update-engine.service... Feb 9 18:57:36.056855 jq[1705]: true Feb 9 18:57:35.961997 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:57:36.058098 tar[1708]: crictl Feb 9 18:57:35.973668 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:57:35.974089 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:57:36.065907 tar[1707]: ./ Feb 9 18:57:36.065907 tar[1707]: ./macvlan Feb 9 18:57:36.030059 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:57:36.030615 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:57:36.074146 jq[1718]: true Feb 9 18:57:36.087484 dbus-daemon[1692]: [system] SELinux support is enabled Feb 9 18:57:36.089595 dbus-daemon[1692]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1515 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 18:57:36.087716 systemd[1]: Started dbus.service. Feb 9 18:57:36.093070 dbus-daemon[1692]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 18:57:36.092280 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:57:36.092312 systemd[1]: Reached target system-config.target. Feb 9 18:57:36.093585 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 9 18:57:36.093609 systemd[1]: Reached target user-config.target. Feb 9 18:57:36.100029 systemd[1]: Starting systemd-hostnamed.service... Feb 9 18:57:36.117432 extend-filesystems[1694]: Found nvme0n1 Feb 9 18:57:36.120648 extend-filesystems[1694]: Found nvme0n1p1 Feb 9 18:57:36.129426 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:57:36.130153 systemd[1]: Finished motdgen.service. Feb 9 18:57:36.145191 extend-filesystems[1694]: Found nvme0n1p2 Feb 9 18:57:36.145191 extend-filesystems[1694]: Found nvme0n1p3 Feb 9 18:57:36.145191 extend-filesystems[1694]: Found usr Feb 9 18:57:36.145191 extend-filesystems[1694]: Found nvme0n1p4 Feb 9 18:57:36.145191 extend-filesystems[1694]: Found nvme0n1p6 Feb 9 18:57:36.145191 extend-filesystems[1694]: Found nvme0n1p7 Feb 9 18:57:36.145191 extend-filesystems[1694]: Found nvme0n1p9 Feb 9 18:57:36.145191 extend-filesystems[1694]: Checking size of /dev/nvme0n1p9 Feb 9 18:57:36.177053 extend-filesystems[1694]: Resized partition /dev/nvme0n1p9 Feb 9 18:57:36.190572 update_engine[1703]: I0209 18:57:36.189374 1703 main.cc:92] Flatcar Update Engine starting Feb 9 18:57:36.195294 extend-filesystems[1753]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:57:36.198258 systemd[1]: Started update-engine.service. Feb 9 18:57:36.202029 systemd[1]: Started locksmithd.service. Feb 9 18:57:36.204015 update_engine[1703]: I0209 18:57:36.203655 1703 update_check_scheduler.cc:74] Next update check in 9m8s Feb 9 18:57:36.216459 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 18:57:36.332298 env[1711]: time="2024-02-09T18:57:36.330087177Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:57:36.349457 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 18:57:36.379595 extend-filesystems[1753]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 18:57:36.379595 extend-filesystems[1753]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:57:36.379595 extend-filesystems[1753]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 18:57:36.374556 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:57:36.389800 extend-filesystems[1694]: Resized filesystem in /dev/nvme0n1p9 Feb 9 18:57:36.374856 systemd[1]: Finished extend-filesystems.service. Feb 9 18:57:36.392461 bash[1758]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:57:36.393715 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:57:36.408386 systemd-logind[1702]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 18:57:36.408431 systemd-logind[1702]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 18:57:36.410893 systemd-logind[1702]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 18:57:36.411783 systemd-logind[1702]: New seat seat0. Feb 9 18:57:36.418067 systemd[1]: Started systemd-logind.service. Feb 9 18:57:36.479823 tar[1707]: ./static Feb 9 18:57:36.490896 env[1711]: time="2024-02-09T18:57:36.490803269Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:57:36.492648 env[1711]: time="2024-02-09T18:57:36.492616843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:57:36.494376 env[1711]: time="2024-02-09T18:57:36.494338167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:57:36.497539 env[1711]: time="2024-02-09T18:57:36.497509893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:57:36.497996 env[1711]: time="2024-02-09T18:57:36.497972501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:57:36.498492 env[1711]: time="2024-02-09T18:57:36.498470187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:57:36.498594 env[1711]: time="2024-02-09T18:57:36.498574270Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:57:36.498663 env[1711]: time="2024-02-09T18:57:36.498649605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:57:36.498836 env[1711]: time="2024-02-09T18:57:36.498819854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:57:36.499193 env[1711]: time="2024-02-09T18:57:36.499173304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:57:36.503681 env[1711]: time="2024-02-09T18:57:36.503643822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:57:36.503802 env[1711]: time="2024-02-09T18:57:36.503785653Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:57:36.503951 env[1711]: time="2024-02-09T18:57:36.503935809Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:57:36.504018 env[1711]: time="2024-02-09T18:57:36.504004669Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:57:36.511229 env[1711]: time="2024-02-09T18:57:36.511180687Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:57:36.511554 env[1711]: time="2024-02-09T18:57:36.511471852Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:57:36.511554 env[1711]: time="2024-02-09T18:57:36.511497942Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:57:36.511768 env[1711]: time="2024-02-09T18:57:36.511705751Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:57:36.511768 env[1711]: time="2024-02-09T18:57:36.511729962Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:57:36.511929 env[1711]: time="2024-02-09T18:57:36.511751620Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 9 18:57:36.511929 env[1711]: time="2024-02-09T18:57:36.511874405Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:57:36.511929 env[1711]: time="2024-02-09T18:57:36.511896965Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:57:36.512149 env[1711]: time="2024-02-09T18:57:36.511916419Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:57:36.512149 env[1711]: time="2024-02-09T18:57:36.512078654Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:57:36.512149 env[1711]: time="2024-02-09T18:57:36.512100551Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:57:36.512149 env[1711]: time="2024-02-09T18:57:36.512121416Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:57:36.512458 env[1711]: time="2024-02-09T18:57:36.512420576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:57:36.512618 env[1711]: time="2024-02-09T18:57:36.512602283Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:57:36.513198 env[1711]: time="2024-02-09T18:57:36.513178321Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:57:36.513303 env[1711]: time="2024-02-09T18:57:36.513288196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.513375 env[1711]: time="2024-02-09T18:57:36.513361864Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:57:36.513511 env[1711]: time="2024-02-09T18:57:36.513495708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.513585 env[1711]: time="2024-02-09T18:57:36.513572531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.513670 env[1711]: time="2024-02-09T18:57:36.513656839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.513774 env[1711]: time="2024-02-09T18:57:36.513733723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.513908 env[1711]: time="2024-02-09T18:57:36.513893877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.513999 env[1711]: time="2024-02-09T18:57:36.513985601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.514082 env[1711]: time="2024-02-09T18:57:36.514068741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.514169 env[1711]: time="2024-02-09T18:57:36.514155697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.514267 env[1711]: time="2024-02-09T18:57:36.514253332Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 9 18:57:36.514527 env[1711]: time="2024-02-09T18:57:36.514501464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.514624 env[1711]: time="2024-02-09T18:57:36.514608889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.514714 env[1711]: time="2024-02-09T18:57:36.514700200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.514797 env[1711]: time="2024-02-09T18:57:36.514784774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:57:36.514881 env[1711]: time="2024-02-09T18:57:36.514864036Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:57:36.514977 env[1711]: time="2024-02-09T18:57:36.514963156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:57:36.515073 env[1711]: time="2024-02-09T18:57:36.515058487Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:57:36.515182 env[1711]: time="2024-02-09T18:57:36.515169208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 18:57:36.515657 env[1711]: time="2024-02-09T18:57:36.515571729Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:57:36.518663 env[1711]: time="2024-02-09T18:57:36.515833611Z" level=info msg="Connect containerd service" Feb 9 18:57:36.518663 env[1711]: time="2024-02-09T18:57:36.515899233Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:57:36.518663 env[1711]: time="2024-02-09T18:57:36.517557093Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:57:36.518956 env[1711]: time="2024-02-09T18:57:36.518918623Z" level=info msg="Start subscribing containerd event" Feb 9 18:57:36.522424 env[1711]: time="2024-02-09T18:57:36.522385526Z" level=info msg="Start recovering state" Feb 9 18:57:36.522637 env[1711]: time="2024-02-09T18:57:36.522621843Z" level=info msg="Start event monitor" Feb 9 18:57:36.522723 env[1711]: time="2024-02-09T18:57:36.522711769Z" level=info msg="Start snapshots syncer" Feb 9 18:57:36.522819 env[1711]: time="2024-02-09T18:57:36.522802109Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:57:36.522905 env[1711]: time="2024-02-09T18:57:36.522891996Z" level=info msg="Start streaming server" Feb 9 18:57:36.523370 env[1711]: time="2024-02-09T18:57:36.523337202Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:57:36.523538 env[1711]: time="2024-02-09T18:57:36.523516608Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:57:36.523781 systemd[1]: Started containerd.service. Feb 9 18:57:36.524180 env[1711]: time="2024-02-09T18:57:36.524162490Z" level=info msg="containerd successfully booted in 0.275027s" Feb 9 18:57:36.573673 dbus-daemon[1692]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 18:57:36.573834 systemd[1]: Started systemd-hostnamed.service. Feb 9 18:57:36.576368 dbus-daemon[1692]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1735 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 18:57:36.580241 systemd[1]: Starting polkit.service... Feb 9 18:57:36.616341 polkitd[1809]: Started polkitd version 121 Feb 9 18:57:36.632013 tar[1707]: ./vlan Feb 9 18:57:36.638098 polkitd[1809]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 18:57:36.638179 polkitd[1809]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 18:57:36.640875 polkitd[1809]: Finished loading, compiling and executing 2 rules Feb 9 18:57:36.641402 dbus-daemon[1692]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 18:57:36.641611 systemd[1]: Started polkit.service. Feb 9 18:57:36.642195 polkitd[1809]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 18:57:36.664283 systemd-hostnamed[1735]: Hostname set to (transient) Feb 9 18:57:36.667177 systemd-resolved[1656]: System hostname changed to 'ip-172-31-21-130'. 
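The containerd entries above use a logfmt-style layout (time="…" level=… msg="…" type=…). Purely as an aid for reading this transcript, and not part of the boot itself, here is a minimal Python sketch that pulls those fields out of one entry; the sample line is quoted from the log above.

import re

# Illustrative parser for the containerd log entries quoted above,
# which follow a logfmt-style key=value layout with quoted strings.
FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_containerd_line(line: str) -> dict:
    """Return the key/value fields of one containerd log entry."""
    fields = {}
    for key, value in FIELD.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        fields[key] = value
    return fields

sample = ('time="2024-02-09T18:57:36.515058487Z" level=error '
          'msg="failed to initialize a tracing processor \\"otlp\\"" '
          'error="no OpenTelemetry endpoint: skip plugin"')
entry = parse_containerd_line(sample)
print(entry["level"], "-", entry["msg"])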
Feb 9 18:57:36.672951 coreos-metadata[1690]: Feb 09 18:57:36.668 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 18:57:36.676342 coreos-metadata[1690]: Feb 09 18:57:36.676 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 18:57:36.677179 coreos-metadata[1690]: Feb 09 18:57:36.677 INFO Fetch successful Feb 9 18:57:36.677179 coreos-metadata[1690]: Feb 09 18:57:36.677 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 18:57:36.678454 coreos-metadata[1690]: Feb 09 18:57:36.678 INFO Fetch successful Feb 9 18:57:36.680789 unknown[1690]: wrote ssh authorized keys file for user: core Feb 9 18:57:36.694581 systemd-networkd[1515]: eth0: Gained IPv6LL Feb 9 18:57:36.703589 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:57:36.707959 systemd[1]: Reached target network-online.target. Feb 9 18:57:36.711798 systemd[1]: Started amazon-ssm-agent.service. Feb 9 18:57:36.715210 systemd[1]: Started nvidia.service. Feb 9 18:57:36.766861 update-ssh-keys[1842]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:57:36.768028 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 18:57:36.945668 amazon-ssm-agent[1845]: 2024/02/09 18:57:36 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 18:57:36.963391 amazon-ssm-agent[1845]: Initializing new seelog logger Feb 9 18:57:36.966649 amazon-ssm-agent[1845]: New Seelog Logger Creation Complete Feb 9 18:57:36.967032 amazon-ssm-agent[1845]: 2024/02/09 18:57:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 18:57:36.970360 amazon-ssm-agent[1845]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 18:57:36.971053 amazon-ssm-agent[1845]: 2024/02/09 18:57:36 processing appconfig overrides Feb 9 18:57:37.009096 tar[1707]: ./portmap Feb 9 18:57:37.083570 systemd[1]: nvidia.service: Deactivated successfully. 
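The coreos-metadata lines above first PUT http://169.254.169.254/latest/api/token and then fetch paths such as /2019-10-01/meta-data/public-keys. Below is a minimal sketch of that IMDSv2 exchange, assuming the standard token headers (the header names are not quoted from the log) and only workable from inside an EC2 instance. The same token-then-GET pattern covers the instance-id, hostname and identity-document fetches that appear later in this log.

import urllib.request

# Rough sketch of the IMDSv2 exchange coreos-metadata performs above:
# PUT /latest/api/token for a session token, then GET metadata paths with it.
IMDS = "http://169.254.169.254"

def imds_get(path: str) -> str:
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
        method="PUT",
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    req = urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

# Same path the metadata service fetches above for the "core" user's SSH keys:
print(imds_get("/2019-10-01/meta-data/public-keys"))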
Feb 9 18:57:37.181378 tar[1707]: ./host-local Feb 9 18:57:37.346244 tar[1707]: ./vrf Feb 9 18:57:37.507995 tar[1707]: ./bridge Feb 9 18:57:37.613294 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Create new startup processor Feb 9 18:57:37.613730 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 18:57:37.613842 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Initializing bookkeeping folders Feb 9 18:57:37.613936 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO removing the completed state files Feb 9 18:57:37.614011 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Initializing bookkeeping folders for long running plugins Feb 9 18:57:37.614084 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 18:57:37.614157 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Initializing healthcheck folders for long running plugins Feb 9 18:57:37.614229 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Initializing locations for inventory plugin Feb 9 18:57:37.614310 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Initializing default location for custom inventory Feb 9 18:57:37.614384 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Initializing default location for file inventory Feb 9 18:57:37.614476 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Initializing default location for role inventory Feb 9 18:57:37.614553 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Init the cloudwatchlogs publisher Feb 9 18:57:37.614688 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:configureDocker Feb 9 18:57:37.614764 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 18:57:37.614831 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:runDocument Feb 9 18:57:37.614915 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 18:57:37.614986 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 18:57:37.615061 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 18:57:37.615138 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 18:57:37.615210 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:configurePackage Feb 9 18:57:37.615281 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform independent plugin aws:downloadContent Feb 9 18:57:37.615347 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 18:57:37.615410 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 18:57:37.615490 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO OS: linux, Arch: amd64 Feb 9 18:57:37.616683 amazon-ssm-agent[1845]: datastore file 
/var/lib/amazon/ssm/i-024a2f9429cc886a0/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 18:57:37.624877 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 18:57:37.638463 tar[1707]: ./tuning Feb 9 18:57:37.720907 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 18:57:37.724234 tar[1707]: ./firewall Feb 9 18:57:37.724101 systemd[1]: Finished prepare-critools.service. Feb 9 18:57:37.785095 tar[1707]: ./host-device Feb 9 18:57:37.815188 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 18:57:37.833122 tar[1707]: ./sbr Feb 9 18:57:37.875949 tar[1707]: ./loopback Feb 9 18:57:37.909663 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] Starting message polling Feb 9 18:57:37.915251 tar[1707]: ./dhcp Feb 9 18:57:38.004312 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 18:57:38.025980 tar[1707]: ./ptp Feb 9 18:57:38.074753 tar[1707]: ./ipvlan Feb 9 18:57:38.099159 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [instanceID=i-024a2f9429cc886a0] Starting association polling Feb 9 18:57:38.126615 tar[1707]: ./bandwidth Feb 9 18:57:38.199239 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 18:57:38.202132 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:57:38.244127 locksmithd[1754]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:57:38.294571 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 18:57:38.390112 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 18:57:38.485703 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 18:57:38.581551 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 18:57:38.677848 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [OfflineService] Starting document processing engine... Feb 9 18:57:38.774057 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [OfflineService] [EngineProcessor] Starting Feb 9 18:57:38.841524 sshd_keygen[1736]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:57:38.863278 systemd[1]: Finished sshd-keygen.service. Feb 9 18:57:38.866842 systemd[1]: Starting issuegen.service... Feb 9 18:57:38.870498 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 18:57:38.874028 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:57:38.874349 systemd[1]: Finished issuegen.service. Feb 9 18:57:38.877496 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:57:38.886335 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:57:38.889641 systemd[1]: Started getty@tty1.service. Feb 9 18:57:38.892602 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 18:57:38.893910 systemd[1]: Reached target getty.target. Feb 9 18:57:38.895035 systemd[1]: Reached target multi-user.target. Feb 9 18:57:38.898376 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
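Earlier, containerd warned that /etc/cni/net.d holds no network config, and the tar[1707] lines above unpack the stock CNI plugin binaries (bridge, host-local, portmap, …) that prepare-cni-plugins.service provides under /opt/cni/bin. As a hedged illustration only, this is the kind of conflist that would satisfy that directory; the network name and subnet are hypothetical, and on this node the real config is expected to arrive later from whatever network add-on the cluster installs.

import json

# Hypothetical example of a CNI conflist for /etc/cni/net.d/ using the
# bridge, host-local and portmap plugins unpacked above. Values are
# illustrative, not taken from this machine.
conflist = {
    "cniVersion": "0.3.1",
    "name": "example-net",                 # hypothetical network name
    "plugins": [
        {
            "type": "bridge",              # binary installed to /opt/cni/bin above
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

with open("10-example.conflist", "w") as f:   # would live in /etc/cni/net.d/
    json.dump(conflist, f, indent=2)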
Feb 9 18:57:38.913278 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:57:38.913909 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:57:38.919343 systemd[1]: Startup finished in 8.771s (kernel) + 9.087s (userspace) = 17.858s. Feb 9 18:57:38.967570 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [OfflineService] Starting message polling Feb 9 18:57:39.064358 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [OfflineService] Starting send replies to MDS Feb 9 18:57:39.161364 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 18:57:39.258684 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 18:57:39.356116 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 18:57:39.453894 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 18:57:39.551797 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 18:57:39.649783 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 18:57:39.748041 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-024a2f9429cc886a0, requestId: 829bb912-3453-4c4b-b4ec-2c3ac2650790 Feb 9 18:57:39.846559 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] listening reply. Feb 9 18:57:39.945152 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 18:57:40.043970 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [StartupProcessor] Executing startup processor tasks Feb 9 18:57:40.143054 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 18:57:40.242384 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 18:57:40.341796 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 18:57:40.441462 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-024a2f9429cc886a0?role=subscribe&stream=input Feb 9 18:57:40.541205 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-024a2f9429cc886a0?role=subscribe&stream=input Feb 9 18:57:40.641191 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 18:57:40.741390 amazon-ssm-agent[1845]: 2024-02-09 18:57:37 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 18:57:43.930541 amazon-ssm-agent[1845]: 2024-02-09 18:57:43 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Feb 9 18:57:44.995350 systemd[1]: Created slice system-sshd.slice. Feb 9 18:57:44.997093 systemd[1]: Started sshd@0-172.31.21.130:22-139.178.68.195:44984.service. 
Feb 9 18:57:45.200531 sshd[1927]: Accepted publickey for core from 139.178.68.195 port 44984 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:57:45.203931 sshd[1927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:57:45.220909 systemd[1]: Created slice user-500.slice. Feb 9 18:57:45.222715 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:57:45.228210 systemd-logind[1702]: New session 1 of user core. Feb 9 18:57:45.250042 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:57:45.252932 systemd[1]: Starting user@500.service... Feb 9 18:57:45.258909 (systemd)[1932]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:57:45.362240 systemd[1932]: Queued start job for default target default.target. Feb 9 18:57:45.362579 systemd[1932]: Reached target paths.target. Feb 9 18:57:45.362604 systemd[1932]: Reached target sockets.target. Feb 9 18:57:45.362624 systemd[1932]: Reached target timers.target. Feb 9 18:57:45.362643 systemd[1932]: Reached target basic.target. Feb 9 18:57:45.362699 systemd[1932]: Reached target default.target. Feb 9 18:57:45.362737 systemd[1932]: Startup finished in 95ms. Feb 9 18:57:45.363226 systemd[1]: Started user@500.service. Feb 9 18:57:45.364843 systemd[1]: Started session-1.scope. Feb 9 18:57:45.506973 systemd[1]: Started sshd@1-172.31.21.130:22-139.178.68.195:44990.service. Feb 9 18:57:45.686525 sshd[1941]: Accepted publickey for core from 139.178.68.195 port 44990 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:57:45.687926 sshd[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:57:45.694034 systemd-logind[1702]: New session 2 of user core. Feb 9 18:57:45.694636 systemd[1]: Started session-2.scope. Feb 9 18:57:45.825348 sshd[1941]: pam_unix(sshd:session): session closed for user core Feb 9 18:57:45.829852 systemd[1]: sshd@1-172.31.21.130:22-139.178.68.195:44990.service: Deactivated successfully. Feb 9 18:57:45.831855 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:57:45.832704 systemd-logind[1702]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:57:45.834061 systemd-logind[1702]: Removed session 2. Feb 9 18:57:45.849306 systemd[1]: Started sshd@2-172.31.21.130:22-139.178.68.195:45006.service. Feb 9 18:57:46.005737 sshd[1948]: Accepted publickey for core from 139.178.68.195 port 45006 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:57:46.007327 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:57:46.014595 systemd[1]: Started session-3.scope. Feb 9 18:57:46.014955 systemd-logind[1702]: New session 3 of user core. Feb 9 18:57:46.133765 sshd[1948]: pam_unix(sshd:session): session closed for user core Feb 9 18:57:46.137576 systemd[1]: sshd@2-172.31.21.130:22-139.178.68.195:45006.service: Deactivated successfully. Feb 9 18:57:46.139112 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:57:46.139134 systemd-logind[1702]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:57:46.140409 systemd-logind[1702]: Removed session 3. Feb 9 18:57:46.158103 systemd[1]: Started sshd@3-172.31.21.130:22-139.178.68.195:58186.service. 
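The sshd lines above report each publickey login with a SHA256:… fingerprint of the key that was written into /home/core/.ssh/authorized_keys earlier. For reference, OpenSSH derives that string by hashing the raw key blob and base64-encoding the digest without padding; the sketch below builds an all-zero placeholder ed25519 blob (not the real key from this log) just to show the shape of the computation.

import base64
import hashlib
import struct

# Build a placeholder ssh-ed25519 key blob in SSH wire format
# (length-prefixed strings), then derive an OpenSSH-style fingerprint.
def ssh_string(b: bytes) -> bytes:
    return struct.pack(">I", len(b)) + b

blob = ssh_string(b"ssh-ed25519") + ssh_string(bytes(32))  # all-zero placeholder key
digest = hashlib.sha256(blob).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))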
Feb 9 18:57:46.314283 sshd[1955]: Accepted publickey for core from 139.178.68.195 port 58186 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:57:46.315885 sshd[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:57:46.322410 systemd[1]: Started session-4.scope. Feb 9 18:57:46.322800 systemd-logind[1702]: New session 4 of user core. Feb 9 18:57:46.445870 sshd[1955]: pam_unix(sshd:session): session closed for user core Feb 9 18:57:46.449876 systemd[1]: sshd@3-172.31.21.130:22-139.178.68.195:58186.service: Deactivated successfully. Feb 9 18:57:46.451596 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:57:46.452023 systemd-logind[1702]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:57:46.453356 systemd-logind[1702]: Removed session 4. Feb 9 18:57:46.470023 systemd[1]: Started sshd@4-172.31.21.130:22-139.178.68.195:58200.service. Feb 9 18:57:46.627564 sshd[1962]: Accepted publickey for core from 139.178.68.195 port 58200 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:57:46.628996 sshd[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:57:46.634671 systemd[1]: Started session-5.scope. Feb 9 18:57:46.635070 systemd-logind[1702]: New session 5 of user core. Feb 9 18:57:46.759180 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:57:46.759555 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:57:47.341289 systemd[1]: Reloading. Feb 9 18:57:47.417221 /usr/lib/systemd/system-generators/torcx-generator[1996]: time="2024-02-09T18:57:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:57:47.418283 /usr/lib/systemd/system-generators/torcx-generator[1996]: time="2024-02-09T18:57:47Z" level=info msg="torcx already run" Feb 9 18:57:47.550722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:57:47.550743 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:57:47.572317 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:57:47.667810 systemd[1]: Started kubelet.service. Feb 9 18:57:47.682999 systemd[1]: Starting coreos-metadata.service... Feb 9 18:57:47.760472 kubelet[2053]: E0209 18:57:47.760391 2053 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:57:47.762612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:57:47.762965 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
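The first kubelet start above exits because no container runtime endpoint was passed, even though containerd reported earlier that it is serving on /run/containerd/containerd.sock. Purely as an illustration (assuming the unix:// endpoint form named in the kubelet error message), a quick check that the CRI socket answers:

import socket

# Illustrative check, not part of the boot: verify the containerd socket
# that the kubelet's --container-runtime-endpoint flag would point at.
ENDPOINT = "/run/containerd/containerd.sock"

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(ENDPOINT)
    print("containerd is listening; kubelet could be started with "
          f"--container-runtime-endpoint=unix://{ENDPOINT}")
except OSError as exc:
    print(f"no CRI socket at {ENDPOINT}: {exc}")
finally:
    s.close()

After the reload that follows, the kubelet does initialize against containerd 1.6.16, so an endpoint was evidently in place on the second attempt.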
Feb 9 18:57:47.798323 coreos-metadata[2061]: Feb 09 18:57:47.798 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 18:57:47.799431 coreos-metadata[2061]: Feb 09 18:57:47.799 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Feb 9 18:57:47.801003 coreos-metadata[2061]: Feb 09 18:57:47.800 INFO Fetch successful Feb 9 18:57:47.801082 coreos-metadata[2061]: Feb 09 18:57:47.801 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Feb 9 18:57:47.801518 coreos-metadata[2061]: Feb 09 18:57:47.801 INFO Fetch successful Feb 9 18:57:47.801633 coreos-metadata[2061]: Feb 09 18:57:47.801 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Feb 9 18:57:47.802093 coreos-metadata[2061]: Feb 09 18:57:47.802 INFO Fetch successful Feb 9 18:57:47.802188 coreos-metadata[2061]: Feb 09 18:57:47.802 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Feb 9 18:57:47.802669 coreos-metadata[2061]: Feb 09 18:57:47.802 INFO Fetch successful Feb 9 18:57:47.802764 coreos-metadata[2061]: Feb 09 18:57:47.802 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Feb 9 18:57:47.803320 coreos-metadata[2061]: Feb 09 18:57:47.803 INFO Fetch successful Feb 9 18:57:47.803380 coreos-metadata[2061]: Feb 09 18:57:47.803 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Feb 9 18:57:47.803806 coreos-metadata[2061]: Feb 09 18:57:47.803 INFO Fetch successful Feb 9 18:57:47.803806 coreos-metadata[2061]: Feb 09 18:57:47.803 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Feb 9 18:57:47.804378 coreos-metadata[2061]: Feb 09 18:57:47.804 INFO Fetch successful Feb 9 18:57:47.804494 coreos-metadata[2061]: Feb 09 18:57:47.804 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Feb 9 18:57:47.804944 coreos-metadata[2061]: Feb 09 18:57:47.804 INFO Fetch successful Feb 9 18:57:47.816540 systemd[1]: Finished coreos-metadata.service. Feb 9 18:57:48.223049 systemd[1]: Stopped kubelet.service. Feb 9 18:57:48.244990 systemd[1]: Reloading. Feb 9 18:57:48.322350 /usr/lib/systemd/system-generators/torcx-generator[2122]: time="2024-02-09T18:57:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:57:48.322390 /usr/lib/systemd/system-generators/torcx-generator[2122]: time="2024-02-09T18:57:48Z" level=info msg="torcx already run" Feb 9 18:57:48.438127 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:57:48.438151 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:57:48.458379 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:57:48.561206 systemd[1]: Started kubelet.service. Feb 9 18:57:48.612196 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 18:57:48.612196 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:57:48.612669 kubelet[2182]: I0209 18:57:48.612259 2182 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:57:48.613890 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:57:48.613890 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:57:49.215828 kubelet[2182]: I0209 18:57:49.215767 2182 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:57:49.215828 kubelet[2182]: I0209 18:57:49.215821 2182 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:57:49.216324 kubelet[2182]: I0209 18:57:49.216301 2182 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:57:49.218920 kubelet[2182]: I0209 18:57:49.218898 2182 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:57:49.221914 kubelet[2182]: I0209 18:57:49.221883 2182 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 18:57:49.222524 kubelet[2182]: I0209 18:57:49.222504 2182 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:57:49.222601 kubelet[2182]: I0209 18:57:49.222593 2182 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:57:49.222818 kubelet[2182]: I0209 18:57:49.222620 2182 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:57:49.222818 kubelet[2182]: I0209 18:57:49.222732 2182 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 
18:57:49.222988 kubelet[2182]: I0209 18:57:49.222853 2182 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:57:49.239155 kubelet[2182]: I0209 18:57:49.239111 2182 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:57:49.239326 kubelet[2182]: I0209 18:57:49.239316 2182 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:57:49.239498 kubelet[2182]: I0209 18:57:49.239485 2182 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:57:49.239649 kubelet[2182]: I0209 18:57:49.239637 2182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:57:49.239904 kubelet[2182]: E0209 18:57:49.239889 2182 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:49.239976 kubelet[2182]: E0209 18:57:49.239939 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:49.240743 kubelet[2182]: I0209 18:57:49.240728 2182 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:57:49.241264 kubelet[2182]: W0209 18:57:49.241248 2182 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:57:49.241847 kubelet[2182]: I0209 18:57:49.241832 2182 server.go:1186] "Started kubelet" Feb 9 18:57:49.243660 kubelet[2182]: I0209 18:57:49.243641 2182 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:57:49.246047 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 18:57:49.246109 kubelet[2182]: I0209 18:57:49.246051 2182 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:57:49.246284 kubelet[2182]: I0209 18:57:49.246271 2182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:57:49.251936 kubelet[2182]: E0209 18:57:49.251793 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d050563762", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 241808738, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 241808738, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:49.253163 kubelet[2182]: E0209 18:57:49.252921 2182 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:57:49.253163 kubelet[2182]: E0209 18:57:49.252953 2182 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:57:49.254226 kubelet[2182]: W0209 18:57:49.254160 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.21.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:57:49.254733 kubelet[2182]: E0209 18:57:49.254251 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:57:49.254733 kubelet[2182]: W0209 18:57:49.254719 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:57:49.254829 kubelet[2182]: E0209 18:57:49.254742 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:57:49.256950 kubelet[2182]: I0209 18:57:49.256930 2182 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:57:49.257047 kubelet[2182]: I0209 18:57:49.257034 2182 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:57:49.257338 kubelet[2182]: E0209 18:57:49.257324 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:57:49.259293 kubelet[2182]: E0209 18:57:49.259276 2182 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.21.130" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:57:49.259501 kubelet[2182]: E0209 18:57:49.259403 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d051000a18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 252938264, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 252938264, time.Local), Count:1, Type:"Warning", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:49.259719 kubelet[2182]: W0209 18:57:49.259535 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:57:49.259719 kubelet[2182]: E0209 18:57:49.259555 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:57:49.339220 kubelet[2182]: I0209 18:57:49.339194 2182 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:57:49.339377 kubelet[2182]: I0209 18:57:49.339364 2182 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:57:49.339509 kubelet[2182]: I0209 18:57:49.339498 2182 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:57:49.342267 kubelet[2182]: I0209 18:57:49.342248 2182 policy_none.go:49] "None policy: Start" Feb 9 18:57:49.343126 kubelet[2182]: E0209 18:57:49.342967 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560fdb8b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337861003, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337861003, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
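The rejected events above all carry names like 172.31.21.130.17b246d0560fdb8b; the API server refuses them because the kubelet is still talking as system:anonymous while its bootstrap credentials are pending. The hex suffix appears to be the event's first timestamp in nanoseconds (the scheme used by client-go's legacy event recorder), which can be checked against the FirstTimestamp printed in the same entry:

from datetime import datetime, timezone

# Decode the suffix of event "172.31.21.130.17b246d0560fdb8b" from the log above.
secs, nanos = divmod(int("17b246d0560fdb8b", 16), 10**9)
print(datetime.fromtimestamp(secs, tz=timezone.utc), nanos)
# -> 2024-02-09 18:57:49+00:00 337861003, matching FirstTimestamp(..., 18, 57, 49, 337861003, ...)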
Feb 9 18:57:49.344047 kubelet[2182]: I0209 18:57:49.344034 2182 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:57:49.344187 kubelet[2182]: I0209 18:57:49.344177 2182 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:57:49.358129 kubelet[2182]: I0209 18:57:49.358086 2182 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:57:49.363052 kubelet[2182]: I0209 18:57:49.363025 2182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:57:49.364871 kubelet[2182]: E0209 18:57:49.364621 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560ff43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337867325, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337867325, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:49.379664 kubelet[2182]: E0209 18:57:49.379342 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d056100fa1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337874337, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337874337, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:49.379940 kubelet[2182]: I0209 18:57:49.379921 2182 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.130" Feb 9 18:57:49.380334 kubelet[2182]: E0209 18:57:49.380316 2182 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.21.130\" not found" Feb 9 18:57:49.381724 kubelet[2182]: E0209 18:57:49.381699 2182 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.130" Feb 9 18:57:49.381808 kubelet[2182]: E0209 18:57:49.381738 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d057844e6f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 362269807, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 362269807, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:49.383982 kubelet[2182]: E0209 18:57:49.383910 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560fdb8b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337861003, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 379876606, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560fdb8b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:49.385472 kubelet[2182]: E0209 18:57:49.385082 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560ff43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337867325, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 379883866, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560ff43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:49.386322 kubelet[2182]: E0209 18:57:49.386189 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d056100fa1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337874337, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 379889051, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d056100fa1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:49.461351 kubelet[2182]: E0209 18:57:49.461114 2182 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.21.130" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:57:49.485365 kubelet[2182]: I0209 18:57:49.485267 2182 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:57:49.513336 kubelet[2182]: I0209 18:57:49.513308 2182 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:57:49.513336 kubelet[2182]: I0209 18:57:49.513332 2182 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:57:49.513567 kubelet[2182]: I0209 18:57:49.513353 2182 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:57:49.513567 kubelet[2182]: E0209 18:57:49.513402 2182 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:57:49.515384 kubelet[2182]: W0209 18:57:49.515299 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:57:49.515527 kubelet[2182]: E0209 18:57:49.515400 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:57:49.586323 kubelet[2182]: I0209 18:57:49.586293 2182 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.130" Feb 9 18:57:49.589166 kubelet[2182]: E0209 18:57:49.589137 2182 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.130" Feb 9 18:57:49.590255 kubelet[2182]: E0209 18:57:49.590172 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560fdb8b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337861003, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 586174449, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560fdb8b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:49.592232 kubelet[2182]: E0209 18:57:49.592151 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560ff43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337867325, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 586218748, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560ff43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:49.644594 kubelet[2182]: E0209 18:57:49.644375 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d056100fa1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337874337, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 586223458, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d056100fa1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:49.863618 kubelet[2182]: E0209 18:57:49.863392 2182 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.31.21.130" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:57:49.990266 kubelet[2182]: I0209 18:57:49.990227 2182 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.130" Feb 9 18:57:49.992580 kubelet[2182]: E0209 18:57:49.992556 2182 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.130" Feb 9 18:57:49.992691 kubelet[2182]: E0209 18:57:49.992550 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560fdb8b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337861003, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 990179955, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560fdb8b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:50.044523 kubelet[2182]: E0209 18:57:50.044404 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560ff43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337867325, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 990191564, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560ff43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:50.110871 kubelet[2182]: W0209 18:57:50.110835 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:57:50.110871 kubelet[2182]: E0209 18:57:50.110871 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:57:50.240822 kubelet[2182]: E0209 18:57:50.240702 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:50.246116 kubelet[2182]: E0209 18:57:50.245992 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d056100fa1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337874337, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 990195464, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d056100fa1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:50.668262 kubelet[2182]: E0209 18:57:50.668159 2182 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.21.130" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:57:50.704430 kubelet[2182]: W0209 18:57:50.703343 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:57:50.704430 kubelet[2182]: E0209 18:57:50.704446 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:57:50.774401 kubelet[2182]: W0209 18:57:50.774360 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.21.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:57:50.774401 kubelet[2182]: E0209 18:57:50.774403 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:57:50.794508 kubelet[2182]: I0209 18:57:50.794480 2182 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.130" Feb 9 18:57:50.797093 kubelet[2182]: E0209 18:57:50.797054 2182 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.130" Feb 9 18:57:50.806624 kubelet[2182]: E0209 18:57:50.806528 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560fdb8b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337861003, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 50, 794412967, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"172.31.21.130.17b246d0560fdb8b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:50.810656 kubelet[2182]: E0209 18:57:50.810557 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560ff43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337867325, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 50, 794425970, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560ff43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:50.845360 kubelet[2182]: E0209 18:57:50.845198 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d056100fa1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337874337, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 50, 794434065, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d056100fa1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:51.024936 kubelet[2182]: W0209 18:57:51.024829 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:57:51.024936 kubelet[2182]: E0209 18:57:51.024864 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:57:51.240882 kubelet[2182]: E0209 18:57:51.240849 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:52.241884 kubelet[2182]: E0209 18:57:52.241776 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:52.270062 kubelet[2182]: E0209 18:57:52.270009 2182 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.21.130" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:57:52.398747 kubelet[2182]: I0209 18:57:52.398719 2182 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.130" Feb 9 18:57:52.400598 kubelet[2182]: E0209 18:57:52.400577 2182 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.130" Feb 9 18:57:52.400716 kubelet[2182]: E0209 18:57:52.400570 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560fdb8b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337861003, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 52, 398660766, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560fdb8b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
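
The recurring file_linux.go:61 line is unrelated to the authorization problem: the kubelet's static-pod source is pointed at /etc/kubernetes/manifests, that directory does not exist on this host, and the file source notes the fact once per poll and moves on. It is harmless when no static pods are expected. The check amounts to an existence probe along these lines (a sketch, not the kubelet's actual code):

    package main

    import (
        "fmt"
        "os"
    )

    // checkManifestDir reproduces the gist of "path does not exist, ignoring":
    // a missing static-pod directory is reported and skipped, never fatal.
    func checkManifestDir(path string) error {
        info, err := os.Stat(path)
        switch {
        case os.IsNotExist(err):
            return fmt.Errorf("path does not exist, ignoring: %s", path)
        case err != nil:
            return err
        case !info.IsDir():
            return fmt.Errorf("expected a directory: %s", path)
        }
        return nil
    }

    func main() {
        if err := checkManifestDir("/etc/kubernetes/manifests"); err != nil {
            fmt.Println(err)
        }
    }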
Feb 9 18:57:52.402479 kubelet[2182]: E0209 18:57:52.402399 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560ff43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337867325, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 52, 398667611, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560ff43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:52.403696 kubelet[2182]: E0209 18:57:52.403629 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d056100fa1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337874337, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 52, 398690174, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d056100fa1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:57:52.530942 kubelet[2182]: W0209 18:57:52.530843 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.21.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:57:52.530942 kubelet[2182]: E0209 18:57:52.530879 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:57:52.539329 kubelet[2182]: W0209 18:57:52.539294 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:57:52.539329 kubelet[2182]: E0209 18:57:52.539332 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:57:53.079625 kubelet[2182]: W0209 18:57:53.079531 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:57:53.079625 kubelet[2182]: E0209 18:57:53.079626 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:57:53.214704 kubelet[2182]: W0209 18:57:53.214665 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:57:53.214704 kubelet[2182]: E0209 18:57:53.214705 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:57:53.243006 kubelet[2182]: E0209 18:57:53.242957 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:54.243924 kubelet[2182]: E0209 18:57:54.243884 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:55.244384 kubelet[2182]: E0209 18:57:55.244332 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:55.472386 kubelet[2182]: E0209 18:57:55.472344 2182 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.21.130" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:57:55.602195 kubelet[2182]: I0209 18:57:55.602082 2182 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.130" Feb 9 18:57:55.603938 kubelet[2182]: E0209 18:57:55.603908 
2182 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.130" Feb 9 18:57:55.604454 kubelet[2182]: E0209 18:57:55.603893 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560fdb8b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.21.130 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337861003, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 55, 601942725, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560fdb8b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:55.605465 kubelet[2182]: E0209 18:57:55.605389 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d0560ff43d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.21.130 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337867325, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 55, 602044283, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d0560ff43d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
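
The lease-controller retry interval doubles on each failure (800ms, 1.6s, 3.2s, 6.4s in the entries above), which is consistent with a capped exponential backoff. The schedule can be sketched as below; the 7s cap is an assumption for illustration, the log only shows the doublings:

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the retry interval up to a limit.
    func nextDelay(current, limit time.Duration) time.Duration {
        next := current * 2
        if next > limit {
            return limit
        }
        return next
    }

    func main() {
        delay := 400 * time.Millisecond
        for i := 0; i < 5; i++ {
            delay = nextDelay(delay, 7*time.Second)
            fmt.Printf("retry %d in %s\n", i+1, delay)
        }
    }

Run as-is it prints 800ms, 1.6s, 3.2s, 6.4s and then the 7s cap.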
Feb 9 18:57:55.606866 kubelet[2182]: E0209 18:57:55.606721 2182 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.130.17b246d056100fa1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.21.130", UID:"172.31.21.130", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.21.130 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.21.130"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 57, 49, 337874337, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 57, 55, 602049410, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.21.130.17b246d056100fa1" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:57:56.244746 kubelet[2182]: E0209 18:57:56.244697 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:56.922819 kubelet[2182]: W0209 18:57:56.922674 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.21.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:57:56.922819 kubelet[2182]: E0209 18:57:56.922822 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:57:57.190962 kubelet[2182]: W0209 18:57:57.190855 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:57:57.190962 kubelet[2182]: E0209 18:57:57.190895 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:57:57.245499 kubelet[2182]: E0209 18:57:57.245457 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:57.280553 kubelet[2182]: W0209 18:57:57.280520 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:57:57.280553 kubelet[2182]: E0209 18:57:57.280556 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:57:57.367526 kubelet[2182]: W0209 18:57:57.367481 2182 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:57:57.367526 kubelet[2182]: E0209 18:57:57.367519 2182 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:57:58.246633 kubelet[2182]: E0209 18:57:58.246581 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:59.218854 kubelet[2182]: I0209 18:57:59.218795 2182 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 18:57:59.247403 kubelet[2182]: E0209 18:57:59.247355 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:57:59.380809 kubelet[2182]: E0209 18:57:59.380761 2182 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.21.130\" not found" Feb 9 18:57:59.624839 kubelet[2182]: E0209 18:57:59.624739 2182 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.21.130" not found Feb 9 18:58:00.247531 kubelet[2182]: E0209 18:58:00.247471 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:00.659256 kubelet[2182]: E0209 18:58:00.658858 2182 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.21.130" not found Feb 9 18:58:01.248175 kubelet[2182]: E0209 18:58:01.248130 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:01.890336 kubelet[2182]: E0209 18:58:01.890304 2182 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.21.130\" not found" node="172.31.21.130" Feb 9 18:58:02.005556 kubelet[2182]: I0209 18:58:02.005511 2182 kubelet_node_status.go:70] "Attempting to register node" node="172.31.21.130" Feb 9 18:58:02.061797 kubelet[2182]: I0209 18:58:02.061763 2182 kubelet_node_status.go:73] "Successfully registered node" node="172.31.21.130" Feb 9 18:58:02.164302 kubelet[2182]: E0209 18:58:02.164181 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:02.177514 sudo[1966]: pam_unix(sudo:session): session closed for user root Feb 9 18:58:02.202596 sshd[1962]: pam_unix(sshd:session): session closed for user core Feb 9 18:58:02.206849 systemd[1]: sshd@4-172.31.21.130:22-139.178.68.195:58200.service: Deactivated successfully. Feb 9 18:58:02.210192 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:58:02.215887 systemd-logind[1702]: Session 5 logged out. Waiting for processes to exit. 
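
Two things turn the corner in this stretch: at 18:57:59 the kubelet detects certificate rotation and reconnects with freshly issued credentials, and at 18:58:02 node registration finally succeeds. The remaining "not found" errors (eviction-manager stats, CSINode initialization, the node-lease owner reference) come from components that poll for the Node object and report a timeout if it has not appeared yet; once the node exists they stop. The shape of such a poll-with-timeout helper, with illustrative intervals:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNotFound = errors.New(`nodes "172.31.21.130" not found`)

    // waitForNode polls getNode until it succeeds or the deadline passes, the
    // pattern behind "timed out waiting for the condition" above.
    func waitForNode(getNode func() error, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := getNode(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for the condition; caused by: %w", errNotFound)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        registeredAt := time.Now().Add(150 * time.Millisecond) // the node "appears" later
        getNode := func() error {
            if time.Now().Before(registeredAt) {
                return errNotFound
            }
            return nil
        }
        if err := waitForNode(getNode, 20*time.Millisecond, time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("node found")
    }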
Feb 9 18:58:02.220482 systemd-logind[1702]: Removed session 5. Feb 9 18:58:02.249611 kubelet[2182]: E0209 18:58:02.249491 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:02.264543 kubelet[2182]: E0209 18:58:02.264497 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:02.365362 kubelet[2182]: E0209 18:58:02.365313 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:02.466419 kubelet[2182]: E0209 18:58:02.466307 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:02.567424 kubelet[2182]: E0209 18:58:02.567379 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:02.667791 kubelet[2182]: E0209 18:58:02.667664 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:02.768577 kubelet[2182]: E0209 18:58:02.768459 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:02.869102 kubelet[2182]: E0209 18:58:02.869057 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:02.969747 kubelet[2182]: E0209 18:58:02.969707 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.070668 kubelet[2182]: E0209 18:58:03.070557 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.171202 kubelet[2182]: E0209 18:58:03.171153 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.250014 kubelet[2182]: E0209 18:58:03.249963 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:03.271300 kubelet[2182]: E0209 18:58:03.271251 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.372085 kubelet[2182]: E0209 18:58:03.371972 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.472820 kubelet[2182]: E0209 18:58:03.472773 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.574288 kubelet[2182]: E0209 18:58:03.574240 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.675113 kubelet[2182]: E0209 18:58:03.675008 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.775887 kubelet[2182]: E0209 18:58:03.775839 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.876559 kubelet[2182]: E0209 18:58:03.876507 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:03.977192 kubelet[2182]: E0209 18:58:03.977084 2182 kubelet_node_status.go:458] "Error getting the current node from lister" 
err="node \"172.31.21.130\" not found" Feb 9 18:58:04.077750 kubelet[2182]: E0209 18:58:04.077703 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.178326 kubelet[2182]: E0209 18:58:04.178284 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.251238 kubelet[2182]: E0209 18:58:04.251121 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:04.278533 kubelet[2182]: E0209 18:58:04.278485 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.379046 kubelet[2182]: E0209 18:58:04.379000 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.479720 kubelet[2182]: E0209 18:58:04.479673 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.580475 kubelet[2182]: E0209 18:58:04.580355 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.681024 kubelet[2182]: E0209 18:58:04.680978 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.781529 kubelet[2182]: E0209 18:58:04.781485 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.882324 kubelet[2182]: E0209 18:58:04.882214 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:04.982852 kubelet[2182]: E0209 18:58:04.982808 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.083380 kubelet[2182]: E0209 18:58:05.083335 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.184103 kubelet[2182]: E0209 18:58:05.183992 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.252170 kubelet[2182]: E0209 18:58:05.252116 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:05.284621 kubelet[2182]: E0209 18:58:05.284574 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.385485 kubelet[2182]: E0209 18:58:05.385426 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.486298 kubelet[2182]: E0209 18:58:05.486067 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.586409 kubelet[2182]: E0209 18:58:05.586363 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.686935 kubelet[2182]: E0209 18:58:05.686890 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.787861 kubelet[2182]: E0209 18:58:05.787751 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"172.31.21.130\" not found" Feb 9 18:58:05.888728 kubelet[2182]: E0209 18:58:05.888679 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:05.989473 kubelet[2182]: E0209 18:58:05.989414 2182 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.21.130\" not found" Feb 9 18:58:06.091240 kubelet[2182]: I0209 18:58:06.091135 2182 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 18:58:06.093121 env[1711]: time="2024-02-09T18:58:06.093070169Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:58:06.094094 kubelet[2182]: I0209 18:58:06.094067 2182 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 18:58:06.249747 kubelet[2182]: I0209 18:58:06.249701 2182 apiserver.go:52] "Watching apiserver" Feb 9 18:58:06.252223 kubelet[2182]: E0209 18:58:06.252193 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:06.252698 kubelet[2182]: I0209 18:58:06.252370 2182 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:58:06.252698 kubelet[2182]: I0209 18:58:06.252449 2182 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:58:06.266672 kubelet[2182]: I0209 18:58:06.266644 2182 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:58:06.354023 kubelet[2182]: I0209 18:58:06.353472 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a3421ee-572e-4121-9050-de7c1324e8f7-kube-proxy\") pod \"kube-proxy-4g5rx\" (UID: \"1a3421ee-572e-4121-9050-de7c1324e8f7\") " pod="kube-system/kube-proxy-4g5rx" Feb 9 18:58:06.354023 kubelet[2182]: I0209 18:58:06.353533 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-run\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354023 kubelet[2182]: I0209 18:58:06.353566 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-bpf-maps\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354023 kubelet[2182]: I0209 18:58:06.353594 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-config-path\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354023 kubelet[2182]: I0209 18:58:06.353624 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-hostproc\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354023 kubelet[2182]: I0209 18:58:06.353654 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-cgroup\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354519 kubelet[2182]: I0209 18:58:06.353690 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/102dedc8-1b17-4155-95ee-fef965d1c1eb-hubble-tls\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354519 kubelet[2182]: I0209 18:58:06.353724 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg96p\" (UniqueName: \"kubernetes.io/projected/102dedc8-1b17-4155-95ee-fef965d1c1eb-kube-api-access-tg96p\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354519 kubelet[2182]: I0209 18:58:06.353764 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a3421ee-572e-4121-9050-de7c1324e8f7-xtables-lock\") pod \"kube-proxy-4g5rx\" (UID: \"1a3421ee-572e-4121-9050-de7c1324e8f7\") " pod="kube-system/kube-proxy-4g5rx" Feb 9 18:58:06.354519 kubelet[2182]: I0209 18:58:06.353844 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-etc-cni-netd\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354519 kubelet[2182]: I0209 18:58:06.354004 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-lib-modules\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.354519 kubelet[2182]: I0209 18:58:06.354042 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/102dedc8-1b17-4155-95ee-fef965d1c1eb-clustermesh-secrets\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.355166 kubelet[2182]: I0209 18:58:06.354071 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a3421ee-572e-4121-9050-de7c1324e8f7-lib-modules\") pod \"kube-proxy-4g5rx\" (UID: \"1a3421ee-572e-4121-9050-de7c1324e8f7\") " pod="kube-system/kube-proxy-4g5rx" Feb 9 18:58:06.355166 kubelet[2182]: I0209 18:58:06.354096 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfm2m\" (UniqueName: \"kubernetes.io/projected/1a3421ee-572e-4121-9050-de7c1324e8f7-kube-api-access-wfm2m\") pod \"kube-proxy-4g5rx\" (UID: \"1a3421ee-572e-4121-9050-de7c1324e8f7\") " pod="kube-system/kube-proxy-4g5rx" Feb 9 18:58:06.355166 kubelet[2182]: I0209 18:58:06.354126 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cni-path\") pod \"cilium-24wzr\" (UID: 
\"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.355166 kubelet[2182]: I0209 18:58:06.354253 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-xtables-lock\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.355166 kubelet[2182]: I0209 18:58:06.354285 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-host-proc-sys-net\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.355453 kubelet[2182]: I0209 18:58:06.354329 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-host-proc-sys-kernel\") pod \"cilium-24wzr\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " pod="kube-system/cilium-24wzr" Feb 9 18:58:06.355453 kubelet[2182]: I0209 18:58:06.354342 2182 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:58:06.557931 env[1711]: time="2024-02-09T18:58:06.557877630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24wzr,Uid:102dedc8-1b17-4155-95ee-fef965d1c1eb,Namespace:kube-system,Attempt:0,}" Feb 9 18:58:06.693832 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 18:58:06.866809 env[1711]: time="2024-02-09T18:58:06.866752227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4g5rx,Uid:1a3421ee-572e-4121-9050-de7c1324e8f7,Namespace:kube-system,Attempt:0,}" Feb 9 18:58:07.248349 env[1711]: time="2024-02-09T18:58:07.248299127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:07.250093 env[1711]: time="2024-02-09T18:58:07.250056125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:07.253182 kubelet[2182]: E0209 18:58:07.253147 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:07.256067 env[1711]: time="2024-02-09T18:58:07.256019230Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:07.259183 env[1711]: time="2024-02-09T18:58:07.259137904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:07.260816 env[1711]: time="2024-02-09T18:58:07.260785981Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:07.262936 env[1711]: time="2024-02-09T18:58:07.262908271Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:07.266923 env[1711]: 
time="2024-02-09T18:58:07.266886727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:07.269990 env[1711]: time="2024-02-09T18:58:07.269954380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:07.314425 env[1711]: time="2024-02-09T18:58:07.314349043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:58:07.314632 env[1711]: time="2024-02-09T18:58:07.314481540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:58:07.314632 env[1711]: time="2024-02-09T18:58:07.314541631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:58:07.314887 env[1711]: time="2024-02-09T18:58:07.314803556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa pid=2283 runtime=io.containerd.runc.v2 Feb 9 18:58:07.322664 env[1711]: time="2024-02-09T18:58:07.322565152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:58:07.323510 env[1711]: time="2024-02-09T18:58:07.322620768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:58:07.323510 env[1711]: time="2024-02-09T18:58:07.322653217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:58:07.323510 env[1711]: time="2024-02-09T18:58:07.322914412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e14d6f71d81de399017e8017c77983869c2c7ce6a2df27f75e0fbf79bd565530 pid=2284 runtime=io.containerd.runc.v2 Feb 9 18:58:07.423032 env[1711]: time="2024-02-09T18:58:07.422982015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24wzr,Uid:102dedc8-1b17-4155-95ee-fef965d1c1eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\"" Feb 9 18:58:07.425396 env[1711]: time="2024-02-09T18:58:07.425352412Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 18:58:07.444731 env[1711]: time="2024-02-09T18:58:07.444677019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4g5rx,Uid:1a3421ee-572e-4121-9050-de7c1324e8f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e14d6f71d81de399017e8017c77983869c2c7ce6a2df27f75e0fbf79bd565530\"" Feb 9 18:58:07.481856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255234226.mount: Deactivated successfully. 
Feb 9 18:58:08.253273 kubelet[2182]: E0209 18:58:08.253233 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:09.239745 kubelet[2182]: E0209 18:58:09.239694 2182 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:09.254156 kubelet[2182]: E0209 18:58:09.254085 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:10.254893 kubelet[2182]: E0209 18:58:10.254847 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:11.255774 kubelet[2182]: E0209 18:58:11.255738 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:12.256517 kubelet[2182]: E0209 18:58:12.256477 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:13.258056 kubelet[2182]: E0209 18:58:13.257934 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:13.959422 amazon-ssm-agent[1845]: 2024-02-09 18:58:13 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 18:58:14.258534 kubelet[2182]: E0209 18:58:14.258169 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:14.643844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496353366.mount: Deactivated successfully. Feb 9 18:58:15.258370 kubelet[2182]: E0209 18:58:15.258325 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:16.259025 kubelet[2182]: E0209 18:58:16.258935 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:17.259305 kubelet[2182]: E0209 18:58:17.259115 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:18.259664 kubelet[2182]: E0209 18:58:18.259610 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:18.316553 env[1711]: time="2024-02-09T18:58:18.316432827Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:18.319422 env[1711]: time="2024-02-09T18:58:18.319381406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:18.322424 env[1711]: time="2024-02-09T18:58:18.322384976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:18.323207 env[1711]: time="2024-02-09T18:58:18.323169551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" 
returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 18:58:18.324735 env[1711]: time="2024-02-09T18:58:18.324706928Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:58:18.326301 env[1711]: time="2024-02-09T18:58:18.326266112Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:58:18.357640 env[1711]: time="2024-02-09T18:58:18.357586333Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\"" Feb 9 18:58:18.358614 env[1711]: time="2024-02-09T18:58:18.358580860Z" level=info msg="StartContainer for \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\"" Feb 9 18:58:18.452131 env[1711]: time="2024-02-09T18:58:18.450193942Z" level=info msg="StartContainer for \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\" returns successfully" Feb 9 18:58:18.998007 env[1711]: time="2024-02-09T18:58:18.997851761Z" level=info msg="shim disconnected" id=2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0 Feb 9 18:58:18.998007 env[1711]: time="2024-02-09T18:58:18.998007039Z" level=warning msg="cleaning up after shim disconnected" id=2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0 namespace=k8s.io Feb 9 18:58:18.998567 env[1711]: time="2024-02-09T18:58:18.998021434Z" level=info msg="cleaning up dead shim" Feb 9 18:58:19.013266 env[1711]: time="2024-02-09T18:58:19.013223991Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:58:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2402 runtime=io.containerd.runc.v2\n" Feb 9 18:58:19.260661 kubelet[2182]: E0209 18:58:19.260363 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:19.342797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0-rootfs.mount: Deactivated successfully. Feb 9 18:58:19.575042 env[1711]: time="2024-02-09T18:58:19.574995037Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:58:19.606035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196524670.mount: Deactivated successfully. Feb 9 18:58:19.627461 env[1711]: time="2024-02-09T18:58:19.627399549Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\"" Feb 9 18:58:19.628527 env[1711]: time="2024-02-09T18:58:19.628490481Z" level=info msg="StartContainer for \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\"" Feb 9 18:58:19.715474 env[1711]: time="2024-02-09T18:58:19.706666891Z" level=info msg="StartContainer for \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\" returns successfully" Feb 9 18:58:19.723423 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:58:19.723737 systemd[1]: Stopped systemd-sysctl.service. 
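
mount-cgroup is the first of cilium's init steps (apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state follow below), so each of these containers is created, started, runs to completion and exits, at which point its runc shim goes away as well. That is why every successful StartContainer here is followed by a "shim disconnected ... cleaning up dead shim" pair and a rootfs.mount deactivation rather than by an error. A toy sequential runner showing that pattern, with names taken from the log and everything else simplified:

    package main

    import "fmt"

    // runInitContainers runs each step to completion before the next one starts;
    // a step that fails aborts the sequence. This mirrors the ordering visible
    // above, not the real containerd/runc machinery.
    func runInitContainers(steps []string, run func(string) error) error {
        for _, name := range steps {
            fmt.Printf("StartContainer %q\n", name)
            if err := run(name); err != nil {
                return fmt.Errorf("init container %q failed: %w", name, err)
            }
            // The container has exited, so its shim is cleaned up too.
            fmt.Printf("shim for %q disconnected, cleaning up\n", name)
        }
        return nil
    }

    func main() {
        steps := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
        if err := runInitContainers(steps, func(string) error { return nil }); err != nil {
            fmt.Println(err)
        }
    }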
Feb 9 18:58:19.724706 systemd[1]: Stopping systemd-sysctl.service... Feb 9 18:58:19.729364 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:58:19.750160 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:58:19.835249 env[1711]: time="2024-02-09T18:58:19.835122609Z" level=info msg="shim disconnected" id=6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74 Feb 9 18:58:19.835249 env[1711]: time="2024-02-09T18:58:19.835178415Z" level=warning msg="cleaning up after shim disconnected" id=6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74 namespace=k8s.io Feb 9 18:58:19.835249 env[1711]: time="2024-02-09T18:58:19.835190995Z" level=info msg="cleaning up dead shim" Feb 9 18:58:19.847558 env[1711]: time="2024-02-09T18:58:19.847510860Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:58:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2475 runtime=io.containerd.runc.v2\n" Feb 9 18:58:20.261354 kubelet[2182]: E0209 18:58:20.261139 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:20.343534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299482330.mount: Deactivated successfully. Feb 9 18:58:20.343736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142564142.mount: Deactivated successfully. Feb 9 18:58:20.578519 env[1711]: time="2024-02-09T18:58:20.578472340Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:58:20.609810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117403894.mount: Deactivated successfully. Feb 9 18:58:20.629689 env[1711]: time="2024-02-09T18:58:20.629636532Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\"" Feb 9 18:58:20.630696 env[1711]: time="2024-02-09T18:58:20.630667829Z" level=info msg="StartContainer for \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\"" Feb 9 18:58:20.736636 env[1711]: time="2024-02-09T18:58:20.736586321Z" level=info msg="StartContainer for \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\" returns successfully" Feb 9 18:58:20.817281 env[1711]: time="2024-02-09T18:58:20.816284676Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:20.830978 env[1711]: time="2024-02-09T18:58:20.830854090Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:20.856158 env[1711]: time="2024-02-09T18:58:20.856108275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:20.883290 env[1711]: time="2024-02-09T18:58:20.883232764Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:20.883594 env[1711]: 
time="2024-02-09T18:58:20.883561478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 18:58:20.886341 env[1711]: time="2024-02-09T18:58:20.886297000Z" level=info msg="CreateContainer within sandbox \"e14d6f71d81de399017e8017c77983869c2c7ce6a2df27f75e0fbf79bd565530\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:58:21.058718 env[1711]: time="2024-02-09T18:58:21.058659311Z" level=info msg="shim disconnected" id=dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252 Feb 9 18:58:21.058718 env[1711]: time="2024-02-09T18:58:21.058715918Z" level=warning msg="cleaning up after shim disconnected" id=dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252 namespace=k8s.io Feb 9 18:58:21.059031 env[1711]: time="2024-02-09T18:58:21.058728471Z" level=info msg="cleaning up dead shim" Feb 9 18:58:21.060262 env[1711]: time="2024-02-09T18:58:21.060217979Z" level=info msg="CreateContainer within sandbox \"e14d6f71d81de399017e8017c77983869c2c7ce6a2df27f75e0fbf79bd565530\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd31d877345fb9f755625ab319878613ac8255c347c349003b429273b2bf0814\"" Feb 9 18:58:21.061299 env[1711]: time="2024-02-09T18:58:21.061271383Z" level=info msg="StartContainer for \"bd31d877345fb9f755625ab319878613ac8255c347c349003b429273b2bf0814\"" Feb 9 18:58:21.076268 update_engine[1703]: I0209 18:58:21.075486 1703 update_attempter.cc:509] Updating boot flags... Feb 9 18:58:21.082173 env[1711]: time="2024-02-09T18:58:21.082041548Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:58:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2537 runtime=io.containerd.runc.v2\n" Feb 9 18:58:21.219731 env[1711]: time="2024-02-09T18:58:21.219581290Z" level=info msg="StartContainer for \"bd31d877345fb9f755625ab319878613ac8255c347c349003b429273b2bf0814\" returns successfully" Feb 9 18:58:21.262291 kubelet[2182]: E0209 18:58:21.261984 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:21.346687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252-rootfs.mount: Deactivated successfully. Feb 9 18:58:21.585408 env[1711]: time="2024-02-09T18:58:21.585368228Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:58:21.617222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470070711.mount: Deactivated successfully. Feb 9 18:58:21.619613 kubelet[2182]: I0209 18:58:21.618251 2182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4g5rx" podStartSLOduration=-9.223372017236588e+09 pod.CreationTimestamp="2024-02-09 18:58:02 +0000 UTC" firstStartedPulling="2024-02-09 18:58:07.44595357 +0000 UTC m=+18.879702783" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:58:21.591812552 +0000 UTC m=+33.025561780" watchObservedRunningTime="2024-02-09 18:58:21.61818745 +0000 UTC m=+33.051936672" Feb 9 18:58:21.628763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193062106.mount: Deactivated successfully. 
Feb 9 18:58:21.635813 env[1711]: time="2024-02-09T18:58:21.635760179Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\"" Feb 9 18:58:21.636412 env[1711]: time="2024-02-09T18:58:21.636374419Z" level=info msg="StartContainer for \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\"" Feb 9 18:58:21.693369 env[1711]: time="2024-02-09T18:58:21.693321187Z" level=info msg="StartContainer for \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\" returns successfully" Feb 9 18:58:21.733420 env[1711]: time="2024-02-09T18:58:21.733362338Z" level=info msg="shim disconnected" id=9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b Feb 9 18:58:21.733420 env[1711]: time="2024-02-09T18:58:21.733418396Z" level=warning msg="cleaning up after shim disconnected" id=9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b namespace=k8s.io Feb 9 18:58:21.733733 env[1711]: time="2024-02-09T18:58:21.733431527Z" level=info msg="cleaning up dead shim" Feb 9 18:58:21.742878 env[1711]: time="2024-02-09T18:58:21.742831176Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:58:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2915 runtime=io.containerd.runc.v2\n" Feb 9 18:58:22.262991 kubelet[2182]: E0209 18:58:22.262863 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:22.591660 env[1711]: time="2024-02-09T18:58:22.591603278Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:58:22.626162 env[1711]: time="2024-02-09T18:58:22.626060854Z" level=info msg="CreateContainer within sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\"" Feb 9 18:58:22.627307 env[1711]: time="2024-02-09T18:58:22.627238825Z" level=info msg="StartContainer for \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\"" Feb 9 18:58:22.727352 env[1711]: time="2024-02-09T18:58:22.727303052Z" level=info msg="StartContainer for \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\" returns successfully" Feb 9 18:58:22.905553 kubelet[2182]: I0209 18:58:22.904120 2182 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:58:23.247475 kernel: Initializing XFRM netlink socket Feb 9 18:58:23.264614 kubelet[2182]: E0209 18:58:23.264564 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:23.608111 kubelet[2182]: I0209 18:58:23.608076 2182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-24wzr" podStartSLOduration=-9.223372015246738e+09 pod.CreationTimestamp="2024-02-09 18:58:02 +0000 UTC" firstStartedPulling="2024-02-09 18:58:07.424892945 +0000 UTC m=+18.858642162" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:58:23.607818212 +0000 UTC m=+35.041567442" watchObservedRunningTime="2024-02-09 18:58:23.608036519 +0000 UTC m=+35.041785748" Feb 9 18:58:24.265690 kubelet[2182]: 
E0209 18:58:24.265640 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:24.917749 (udev-worker)[2586]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:58:24.919843 (udev-worker)[2789]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:58:24.924199 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 18:58:24.924289 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 18:58:24.925307 systemd-networkd[1515]: cilium_host: Link UP Feb 9 18:58:24.925528 systemd-networkd[1515]: cilium_net: Link UP Feb 9 18:58:24.925842 systemd-networkd[1515]: cilium_net: Gained carrier Feb 9 18:58:24.926010 systemd-networkd[1515]: cilium_host: Gained carrier Feb 9 18:58:24.926130 systemd-networkd[1515]: cilium_net: Gained IPv6LL Feb 9 18:58:24.926324 systemd-networkd[1515]: cilium_host: Gained IPv6LL Feb 9 18:58:25.123023 systemd-networkd[1515]: cilium_vxlan: Link UP Feb 9 18:58:25.123032 systemd-networkd[1515]: cilium_vxlan: Gained carrier Feb 9 18:58:25.266524 kubelet[2182]: E0209 18:58:25.266380 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:25.400465 kernel: NET: Registered PF_ALG protocol family Feb 9 18:58:26.212968 systemd-networkd[1515]: lxc_health: Link UP Feb 9 18:58:26.231970 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:58:26.235599 systemd-networkd[1515]: lxc_health: Gained carrier Feb 9 18:58:26.266808 kubelet[2182]: E0209 18:58:26.266771 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:26.350655 systemd-networkd[1515]: cilium_vxlan: Gained IPv6LL Feb 9 18:58:26.727047 kubelet[2182]: I0209 18:58:26.726987 2182 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:58:26.846499 kubelet[2182]: I0209 18:58:26.846461 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czzqw\" (UniqueName: \"kubernetes.io/projected/6fe8815e-4686-474e-8ce6-641ccedc6b92-kube-api-access-czzqw\") pod \"nginx-deployment-8ffc5cf85-c6rsv\" (UID: \"6fe8815e-4686-474e-8ce6-641ccedc6b92\") " pod="default/nginx-deployment-8ffc5cf85-c6rsv" Feb 9 18:58:27.034210 env[1711]: time="2024-02-09T18:58:27.034089258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-c6rsv,Uid:6fe8815e-4686-474e-8ce6-641ccedc6b92,Namespace:default,Attempt:0,}" Feb 9 18:58:27.118757 systemd-networkd[1515]: lxc0860f03e9e0b: Link UP Feb 9 18:58:27.124467 kernel: eth0: renamed from tmp6df12 Feb 9 18:58:27.133586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0860f03e9e0b: link becomes ready Feb 9 18:58:27.132593 systemd-networkd[1515]: lxc0860f03e9e0b: Gained carrier Feb 9 18:58:27.267650 kubelet[2182]: E0209 18:58:27.267599 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:27.912777 systemd-networkd[1515]: lxc_health: Gained IPv6LL Feb 9 18:58:28.268471 kubelet[2182]: E0209 18:58:28.268345 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:29.102816 systemd-networkd[1515]: lxc0860f03e9e0b: Gained IPv6LL Feb 9 18:58:29.240542 kubelet[2182]: E0209 18:58:29.240504 2182 file.go:104] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:29.269059 kubelet[2182]: E0209 18:58:29.269019 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:30.269744 kubelet[2182]: E0209 18:58:30.269705 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:31.270559 kubelet[2182]: E0209 18:58:31.270519 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:32.272121 kubelet[2182]: E0209 18:58:32.272066 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:32.537925 env[1711]: time="2024-02-09T18:58:32.537779098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:58:32.537925 env[1711]: time="2024-02-09T18:58:32.537826179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:58:32.539073 env[1711]: time="2024-02-09T18:58:32.537843348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:58:32.539612 env[1711]: time="2024-02-09T18:58:32.539554597Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6df123dda6f6b20646721a8691980052e31ac183bf588d2d6861b6c9144e9e89 pid=3426 runtime=io.containerd.runc.v2 Feb 9 18:58:32.629572 env[1711]: time="2024-02-09T18:58:32.629527287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-c6rsv,Uid:6fe8815e-4686-474e-8ce6-641ccedc6b92,Namespace:default,Attempt:0,} returns sandbox id \"6df123dda6f6b20646721a8691980052e31ac183bf588d2d6861b6c9144e9e89\"" Feb 9 18:58:32.631340 env[1711]: time="2024-02-09T18:58:32.631214758Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 18:58:33.273315 kubelet[2182]: E0209 18:58:33.273268 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:34.274011 kubelet[2182]: E0209 18:58:34.273973 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:35.274341 kubelet[2182]: E0209 18:58:35.274302 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:35.682927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055444491.mount: Deactivated successfully. 
Feb 9 18:58:36.275581 kubelet[2182]: E0209 18:58:36.275533 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:36.871189 env[1711]: time="2024-02-09T18:58:36.871137817Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:36.875213 env[1711]: time="2024-02-09T18:58:36.875162893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:36.877672 env[1711]: time="2024-02-09T18:58:36.877637018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:36.880248 env[1711]: time="2024-02-09T18:58:36.880211961Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:36.881345 env[1711]: time="2024-02-09T18:58:36.881308962Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 18:58:36.884161 env[1711]: time="2024-02-09T18:58:36.884122227Z" level=info msg="CreateContainer within sandbox \"6df123dda6f6b20646721a8691980052e31ac183bf588d2d6861b6c9144e9e89\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 18:58:36.908208 env[1711]: time="2024-02-09T18:58:36.908160997Z" level=info msg="CreateContainer within sandbox \"6df123dda6f6b20646721a8691980052e31ac183bf588d2d6861b6c9144e9e89\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f1854015aaeb623f20afdfd979b2c71486d71fe2c78be214a5709e7837a5b290\"" Feb 9 18:58:36.909007 env[1711]: time="2024-02-09T18:58:36.908971431Z" level=info msg="StartContainer for \"f1854015aaeb623f20afdfd979b2c71486d71fe2c78be214a5709e7837a5b290\"" Feb 9 18:58:36.994391 env[1711]: time="2024-02-09T18:58:36.994341878Z" level=info msg="StartContainer for \"f1854015aaeb623f20afdfd979b2c71486d71fe2c78be214a5709e7837a5b290\" returns successfully" Feb 9 18:58:37.276768 kubelet[2182]: E0209 18:58:37.276648 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:37.645526 kubelet[2182]: I0209 18:58:37.645489 2182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-c6rsv" podStartSLOduration=-9.223372025209408e+09 pod.CreationTimestamp="2024-02-09 18:58:26 +0000 UTC" firstStartedPulling="2024-02-09 18:58:32.630876956 +0000 UTC m=+44.064626163" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:58:37.645346739 +0000 UTC m=+49.079095970" watchObservedRunningTime="2024-02-09 18:58:37.645368861 +0000 UTC m=+49.079118071" Feb 9 18:58:38.277081 kubelet[2182]: E0209 18:58:38.277026 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:39.277665 kubelet[2182]: E0209 18:58:39.277613 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:40.278575 kubelet[2182]: E0209 
18:58:40.278516 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:40.714571 kubelet[2182]: I0209 18:58:40.714541 2182 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:58:40.895291 kubelet[2182]: I0209 18:58:40.895241 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rq2j\" (UniqueName: \"kubernetes.io/projected/9d2dad79-5fb5-4e8c-bc5f-193a81d9caab-kube-api-access-9rq2j\") pod \"nfs-server-provisioner-0\" (UID: \"9d2dad79-5fb5-4e8c-bc5f-193a81d9caab\") " pod="default/nfs-server-provisioner-0" Feb 9 18:58:40.895291 kubelet[2182]: I0209 18:58:40.895298 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9d2dad79-5fb5-4e8c-bc5f-193a81d9caab-data\") pod \"nfs-server-provisioner-0\" (UID: \"9d2dad79-5fb5-4e8c-bc5f-193a81d9caab\") " pod="default/nfs-server-provisioner-0" Feb 9 18:58:41.022409 env[1711]: time="2024-02-09T18:58:41.022295460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9d2dad79-5fb5-4e8c-bc5f-193a81d9caab,Namespace:default,Attempt:0,}" Feb 9 18:58:41.079738 systemd-networkd[1515]: lxc9506ee73d7ff: Link UP Feb 9 18:58:41.083540 (udev-worker)[3580]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:58:41.093462 kernel: eth0: renamed from tmpe7290 Feb 9 18:58:41.102998 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:58:41.103112 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9506ee73d7ff: link becomes ready Feb 9 18:58:41.103283 systemd-networkd[1515]: lxc9506ee73d7ff: Gained carrier Feb 9 18:58:41.280849 kubelet[2182]: E0209 18:58:41.280724 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:41.452884 env[1711]: time="2024-02-09T18:58:41.452788531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:58:41.452884 env[1711]: time="2024-02-09T18:58:41.452855157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:58:41.453112 env[1711]: time="2024-02-09T18:58:41.452871685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:58:41.453784 env[1711]: time="2024-02-09T18:58:41.453646141Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7290a658b617a48be1a4dd84ed706a452460288dfcf4fad25df4e0bd4bcecdb pid=3594 runtime=io.containerd.runc.v2 Feb 9 18:58:41.546988 env[1711]: time="2024-02-09T18:58:41.546801635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9d2dad79-5fb5-4e8c-bc5f-193a81d9caab,Namespace:default,Attempt:0,} returns sandbox id \"e7290a658b617a48be1a4dd84ed706a452460288dfcf4fad25df4e0bd4bcecdb\"" Feb 9 18:58:41.550644 env[1711]: time="2024-02-09T18:58:41.550565434Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 18:58:42.281726 kubelet[2182]: E0209 18:58:42.281676 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:42.287059 systemd-networkd[1515]: lxc9506ee73d7ff: Gained IPv6LL Feb 9 18:58:43.282592 kubelet[2182]: E0209 18:58:43.282516 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:44.282873 kubelet[2182]: E0209 18:58:44.282811 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:45.166175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630354431.mount: Deactivated successfully. Feb 9 18:58:45.283923 kubelet[2182]: E0209 18:58:45.283885 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:46.284109 kubelet[2182]: E0209 18:58:46.284041 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:47.284924 kubelet[2182]: E0209 18:58:47.284830 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:47.959174 env[1711]: time="2024-02-09T18:58:47.959118669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:47.962393 env[1711]: time="2024-02-09T18:58:47.962340702Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:47.967465 env[1711]: time="2024-02-09T18:58:47.967340029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:47.970082 env[1711]: time="2024-02-09T18:58:47.970048159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:47.970878 env[1711]: time="2024-02-09T18:58:47.970841279Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 18:58:47.980808 env[1711]: time="2024-02-09T18:58:47.980764019Z" level=info 
msg="CreateContainer within sandbox \"e7290a658b617a48be1a4dd84ed706a452460288dfcf4fad25df4e0bd4bcecdb\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 18:58:47.998370 env[1711]: time="2024-02-09T18:58:47.998320215Z" level=info msg="CreateContainer within sandbox \"e7290a658b617a48be1a4dd84ed706a452460288dfcf4fad25df4e0bd4bcecdb\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"8642c67307db6341e410723d3004a4c35b33e2ebc47829de976f6d66fd341bee\"" Feb 9 18:58:47.999214 env[1711]: time="2024-02-09T18:58:47.999182329Z" level=info msg="StartContainer for \"8642c67307db6341e410723d3004a4c35b33e2ebc47829de976f6d66fd341bee\"" Feb 9 18:58:48.030774 systemd[1]: run-containerd-runc-k8s.io-8642c67307db6341e410723d3004a4c35b33e2ebc47829de976f6d66fd341bee-runc.GZmBZK.mount: Deactivated successfully. Feb 9 18:58:48.075481 env[1711]: time="2024-02-09T18:58:48.072779752Z" level=info msg="StartContainer for \"8642c67307db6341e410723d3004a4c35b33e2ebc47829de976f6d66fd341bee\" returns successfully" Feb 9 18:58:48.285543 kubelet[2182]: E0209 18:58:48.285325 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:48.687107 kubelet[2182]: I0209 18:58:48.686635 2182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372028168186e+09 pod.CreationTimestamp="2024-02-09 18:58:40 +0000 UTC" firstStartedPulling="2024-02-09 18:58:41.5498552 +0000 UTC m=+52.983604406" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:58:48.686124238 +0000 UTC m=+60.119873468" watchObservedRunningTime="2024-02-09 18:58:48.686589354 +0000 UTC m=+60.120338583" Feb 9 18:58:49.240547 kubelet[2182]: E0209 18:58:49.240499 2182 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:49.286485 kubelet[2182]: E0209 18:58:49.286430 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:50.287188 kubelet[2182]: E0209 18:58:50.287132 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:51.287812 kubelet[2182]: E0209 18:58:51.287760 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:52.288537 kubelet[2182]: E0209 18:58:52.288475 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:53.289637 kubelet[2182]: E0209 18:58:53.289585 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:54.290407 kubelet[2182]: E0209 18:58:54.290352 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:55.291124 kubelet[2182]: E0209 18:58:55.291077 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:56.291923 kubelet[2182]: E0209 18:58:56.291870 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:57.292232 kubelet[2182]: E0209 18:58:57.292182 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:57.676010 kubelet[2182]: I0209 18:58:57.675968 2182 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:58:57.809973 kubelet[2182]: I0209 18:58:57.809936 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38b8c21e-e100-4ac0-8458-c1efe5d8d3b8\" (UniqueName: \"kubernetes.io/nfs/e980e009-15aa-4c0f-bf26-57aa722d4b51-pvc-38b8c21e-e100-4ac0-8458-c1efe5d8d3b8\") pod \"test-pod-1\" (UID: \"e980e009-15aa-4c0f-bf26-57aa722d4b51\") " pod="default/test-pod-1" Feb 9 18:58:57.810172 kubelet[2182]: I0209 18:58:57.810000 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn2pw\" (UniqueName: \"kubernetes.io/projected/e980e009-15aa-4c0f-bf26-57aa722d4b51-kube-api-access-zn2pw\") pod \"test-pod-1\" (UID: \"e980e009-15aa-4c0f-bf26-57aa722d4b51\") " pod="default/test-pod-1" Feb 9 18:58:57.963466 kernel: FS-Cache: Loaded Feb 9 18:58:58.010279 kernel: RPC: Registered named UNIX socket transport module. Feb 9 18:58:58.010464 kernel: RPC: Registered udp transport module. Feb 9 18:58:58.010502 kernel: RPC: Registered tcp transport module. Feb 9 18:58:58.011245 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 9 18:58:58.065479 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 18:58:58.286282 kernel: NFS: Registering the id_resolver key type Feb 9 18:58:58.286454 kernel: Key type id_resolver registered Feb 9 18:58:58.286481 kernel: Key type id_legacy registered Feb 9 18:58:58.292678 kubelet[2182]: E0209 18:58:58.292647 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:58.326479 nfsidmap[3806]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 9 18:58:58.331623 nfsidmap[3807]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 9 18:58:58.580359 env[1711]: time="2024-02-09T18:58:58.580308995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e980e009-15aa-4c0f-bf26-57aa722d4b51,Namespace:default,Attempt:0,}" Feb 9 18:58:58.627615 (udev-worker)[3794]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:58:58.627711 systemd-networkd[1515]: lxc082b6c11c993: Link UP Feb 9 18:58:58.634189 (udev-worker)[3803]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:58:58.639549 kernel: eth0: renamed from tmp4e3de Feb 9 18:58:58.649396 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:58:58.649724 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc082b6c11c993: link becomes ready Feb 9 18:58:58.649706 systemd-networkd[1515]: lxc082b6c11c993: Gained carrier Feb 9 18:58:58.912551 env[1711]: time="2024-02-09T18:58:58.912023437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:58:58.912757 env[1711]: time="2024-02-09T18:58:58.912062971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:58:58.912757 env[1711]: time="2024-02-09T18:58:58.912078565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:58:58.912757 env[1711]: time="2024-02-09T18:58:58.912339082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e3de7765821a60abee978b087bb9f65722b1a30449b6fa9f70bdeabe9e145aa pid=3830 runtime=io.containerd.runc.v2 Feb 9 18:58:58.995711 env[1711]: time="2024-02-09T18:58:58.995659934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e980e009-15aa-4c0f-bf26-57aa722d4b51,Namespace:default,Attempt:0,} returns sandbox id \"4e3de7765821a60abee978b087bb9f65722b1a30449b6fa9f70bdeabe9e145aa\"" Feb 9 18:58:58.997530 env[1711]: time="2024-02-09T18:58:58.997496844Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 18:58:59.293868 kubelet[2182]: E0209 18:58:59.293714 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:58:59.323777 env[1711]: time="2024-02-09T18:58:59.323719827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:59.326383 env[1711]: time="2024-02-09T18:58:59.326318812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:59.328608 env[1711]: time="2024-02-09T18:58:59.328571955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:59.330773 env[1711]: time="2024-02-09T18:58:59.330735522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:58:59.331503 env[1711]: time="2024-02-09T18:58:59.331468260Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 18:58:59.333903 env[1711]: time="2024-02-09T18:58:59.333863519Z" level=info msg="CreateContainer within sandbox \"4e3de7765821a60abee978b087bb9f65722b1a30449b6fa9f70bdeabe9e145aa\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 18:58:59.350396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22368137.mount: Deactivated successfully. 
Feb 9 18:58:59.357679 env[1711]: time="2024-02-09T18:58:59.357629650Z" level=info msg="CreateContainer within sandbox \"4e3de7765821a60abee978b087bb9f65722b1a30449b6fa9f70bdeabe9e145aa\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d2137beb9ce85788fbeaadde9cfb6156dc28ae567a0903576abfe2818bd2bd39\"" Feb 9 18:58:59.358225 env[1711]: time="2024-02-09T18:58:59.358183331Z" level=info msg="StartContainer for \"d2137beb9ce85788fbeaadde9cfb6156dc28ae567a0903576abfe2818bd2bd39\"" Feb 9 18:58:59.430719 env[1711]: time="2024-02-09T18:58:59.430672333Z" level=info msg="StartContainer for \"d2137beb9ce85788fbeaadde9cfb6156dc28ae567a0903576abfe2818bd2bd39\" returns successfully" Feb 9 18:58:59.718145 kubelet[2182]: I0209 18:58:59.718111 2182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.223372018136694e+09 pod.CreationTimestamp="2024-02-09 18:58:41 +0000 UTC" firstStartedPulling="2024-02-09 18:58:58.996893296 +0000 UTC m=+70.430642513" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:58:59.717798728 +0000 UTC m=+71.151547960" watchObservedRunningTime="2024-02-09 18:58:59.718081649 +0000 UTC m=+71.151830878" Feb 9 18:59:00.294873 kubelet[2182]: E0209 18:59:00.294820 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:00.526725 systemd-networkd[1515]: lxc082b6c11c993: Gained IPv6LL Feb 9 18:59:01.295089 kubelet[2182]: E0209 18:59:01.294996 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:02.295373 kubelet[2182]: E0209 18:59:02.295320 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:03.296328 kubelet[2182]: E0209 18:59:03.296276 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:04.296453 kubelet[2182]: E0209 18:59:04.296395 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:05.297138 kubelet[2182]: E0209 18:59:05.297079 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:05.776377 systemd[1]: run-containerd-runc-k8s.io-373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160-runc.FgIRT8.mount: Deactivated successfully. 
Feb 9 18:59:05.801140 env[1711]: time="2024-02-09T18:59:05.800406896Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:59:05.807610 env[1711]: time="2024-02-09T18:59:05.807471441Z" level=info msg="StopContainer for \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\" with timeout 1 (s)" Feb 9 18:59:05.807885 env[1711]: time="2024-02-09T18:59:05.807856691Z" level=info msg="Stop container \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\" with signal terminated" Feb 9 18:59:05.818036 systemd-networkd[1515]: lxc_health: Link DOWN Feb 9 18:59:05.818046 systemd-networkd[1515]: lxc_health: Lost carrier Feb 9 18:59:05.954587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160-rootfs.mount: Deactivated successfully. Feb 9 18:59:06.150565 env[1711]: time="2024-02-09T18:59:06.150431107Z" level=info msg="shim disconnected" id=373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160 Feb 9 18:59:06.160498 env[1711]: time="2024-02-09T18:59:06.150581841Z" level=warning msg="cleaning up after shim disconnected" id=373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160 namespace=k8s.io Feb 9 18:59:06.160498 env[1711]: time="2024-02-09T18:59:06.150598893Z" level=info msg="cleaning up dead shim" Feb 9 18:59:06.190117 env[1711]: time="2024-02-09T18:59:06.190060240Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3962 runtime=io.containerd.runc.v2\n" Feb 9 18:59:06.192937 env[1711]: time="2024-02-09T18:59:06.192870796Z" level=info msg="StopContainer for \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\" returns successfully" Feb 9 18:59:06.194273 env[1711]: time="2024-02-09T18:59:06.194187690Z" level=info msg="StopPodSandbox for \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\"" Feb 9 18:59:06.194515 env[1711]: time="2024-02-09T18:59:06.194384031Z" level=info msg="Container to stop \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:59:06.194515 env[1711]: time="2024-02-09T18:59:06.194406633Z" level=info msg="Container to stop \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:59:06.194515 env[1711]: time="2024-02-09T18:59:06.194425759Z" level=info msg="Container to stop \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:59:06.194515 env[1711]: time="2024-02-09T18:59:06.194451719Z" level=info msg="Container to stop \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:59:06.194515 env[1711]: time="2024-02-09T18:59:06.194467270Z" level=info msg="Container to stop \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:59:06.197755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa-shm.mount: 
Deactivated successfully. Feb 9 18:59:06.242319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa-rootfs.mount: Deactivated successfully. Feb 9 18:59:06.255975 env[1711]: time="2024-02-09T18:59:06.255918011Z" level=info msg="shim disconnected" id=94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa Feb 9 18:59:06.256228 env[1711]: time="2024-02-09T18:59:06.255981084Z" level=warning msg="cleaning up after shim disconnected" id=94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa namespace=k8s.io Feb 9 18:59:06.256228 env[1711]: time="2024-02-09T18:59:06.255994932Z" level=info msg="cleaning up dead shim" Feb 9 18:59:06.270098 env[1711]: time="2024-02-09T18:59:06.269988653Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3996 runtime=io.containerd.runc.v2\n" Feb 9 18:59:06.274263 env[1711]: time="2024-02-09T18:59:06.274090664Z" level=info msg="TearDown network for sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" successfully" Feb 9 18:59:06.274408 env[1711]: time="2024-02-09T18:59:06.274260822Z" level=info msg="StopPodSandbox for \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" returns successfully" Feb 9 18:59:06.297824 kubelet[2182]: E0209 18:59:06.297781 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:06.374112 kubelet[2182]: I0209 18:59:06.374027 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-etc-cni-netd\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374302 kubelet[2182]: I0209 18:59:06.374167 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-bpf-maps\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374302 kubelet[2182]: I0209 18:59:06.374214 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-cgroup\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374302 kubelet[2182]: I0209 18:59:06.374239 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-xtables-lock\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374302 kubelet[2182]: I0209 18:59:06.374289 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/102dedc8-1b17-4155-95ee-fef965d1c1eb-clustermesh-secrets\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374632 kubelet[2182]: I0209 18:59:06.374320 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-config-path\") pod 
\"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374632 kubelet[2182]: I0209 18:59:06.374389 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/102dedc8-1b17-4155-95ee-fef965d1c1eb-hubble-tls\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374632 kubelet[2182]: I0209 18:59:06.374418 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-lib-modules\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374632 kubelet[2182]: I0209 18:59:06.374483 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-host-proc-sys-net\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374632 kubelet[2182]: I0209 18:59:06.374626 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-run\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374923 kubelet[2182]: I0209 18:59:06.374733 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg96p\" (UniqueName: \"kubernetes.io/projected/102dedc8-1b17-4155-95ee-fef965d1c1eb-kube-api-access-tg96p\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374923 kubelet[2182]: I0209 18:59:06.374780 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cni-path\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374923 kubelet[2182]: I0209 18:59:06.374812 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-host-proc-sys-kernel\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374923 kubelet[2182]: I0209 18:59:06.374840 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-hostproc\") pod \"102dedc8-1b17-4155-95ee-fef965d1c1eb\" (UID: \"102dedc8-1b17-4155-95ee-fef965d1c1eb\") " Feb 9 18:59:06.374923 kubelet[2182]: I0209 18:59:06.374902 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-hostproc" (OuterVolumeSpecName: "hostproc") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.375365 kubelet[2182]: I0209 18:59:06.374959 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.375365 kubelet[2182]: I0209 18:59:06.374982 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.375365 kubelet[2182]: I0209 18:59:06.375022 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.375568 kubelet[2182]: I0209 18:59:06.375544 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.375680 kubelet[2182]: I0209 18:59:06.375662 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.376602 kubelet[2182]: W0209 18:59:06.375934 2182 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/102dedc8-1b17-4155-95ee-fef965d1c1eb/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:59:06.380155 kubelet[2182]: I0209 18:59:06.379992 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:59:06.380356 kubelet[2182]: I0209 18:59:06.376255 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.380571 kubelet[2182]: I0209 18:59:06.376567 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.380750 kubelet[2182]: I0209 18:59:06.380729 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cni-path" (OuterVolumeSpecName: "cni-path") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.380901 kubelet[2182]: I0209 18:59:06.380883 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:06.381222 kubelet[2182]: I0209 18:59:06.381202 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102dedc8-1b17-4155-95ee-fef965d1c1eb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:59:06.385077 kubelet[2182]: I0209 18:59:06.385051 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102dedc8-1b17-4155-95ee-fef965d1c1eb-kube-api-access-tg96p" (OuterVolumeSpecName: "kube-api-access-tg96p") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "kube-api-access-tg96p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:59:06.385163 kubelet[2182]: I0209 18:59:06.385105 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102dedc8-1b17-4155-95ee-fef965d1c1eb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "102dedc8-1b17-4155-95ee-fef965d1c1eb" (UID: "102dedc8-1b17-4155-95ee-fef965d1c1eb"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:59:06.475471 kubelet[2182]: I0209 18:59:06.475182 2182 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-host-proc-sys-kernel\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475471 kubelet[2182]: I0209 18:59:06.475235 2182 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-hostproc\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475471 kubelet[2182]: I0209 18:59:06.475256 2182 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-tg96p\" (UniqueName: \"kubernetes.io/projected/102dedc8-1b17-4155-95ee-fef965d1c1eb-kube-api-access-tg96p\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475471 kubelet[2182]: I0209 18:59:06.475270 2182 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cni-path\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475471 kubelet[2182]: I0209 18:59:06.475291 2182 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-bpf-maps\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475471 kubelet[2182]: I0209 18:59:06.475304 2182 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-cgroup\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475471 kubelet[2182]: I0209 18:59:06.475317 2182 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-etc-cni-netd\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475471 kubelet[2182]: I0209 18:59:06.475333 2182 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/102dedc8-1b17-4155-95ee-fef965d1c1eb-clustermesh-secrets\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475939 kubelet[2182]: I0209 18:59:06.475347 2182 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-xtables-lock\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475939 kubelet[2182]: I0209 18:59:06.475360 2182 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-run\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475939 kubelet[2182]: I0209 18:59:06.475373 2182 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/102dedc8-1b17-4155-95ee-fef965d1c1eb-cilium-config-path\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475939 kubelet[2182]: I0209 18:59:06.475386 2182 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/102dedc8-1b17-4155-95ee-fef965d1c1eb-hubble-tls\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475939 kubelet[2182]: I0209 18:59:06.475401 2182 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-lib-modules\") on node 
\"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.475939 kubelet[2182]: I0209 18:59:06.475417 2182 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/102dedc8-1b17-4155-95ee-fef965d1c1eb-host-proc-sys-net\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:06.716254 kubelet[2182]: I0209 18:59:06.716229 2182 scope.go:115] "RemoveContainer" containerID="373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160" Feb 9 18:59:06.718902 env[1711]: time="2024-02-09T18:59:06.718860231Z" level=info msg="RemoveContainer for \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\"" Feb 9 18:59:06.734833 env[1711]: time="2024-02-09T18:59:06.734731845Z" level=info msg="RemoveContainer for \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\" returns successfully" Feb 9 18:59:06.735282 kubelet[2182]: I0209 18:59:06.735256 2182 scope.go:115] "RemoveContainer" containerID="9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b" Feb 9 18:59:06.736686 env[1711]: time="2024-02-09T18:59:06.736650197Z" level=info msg="RemoveContainer for \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\"" Feb 9 18:59:06.740712 env[1711]: time="2024-02-09T18:59:06.740678157Z" level=info msg="RemoveContainer for \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\" returns successfully" Feb 9 18:59:06.741992 kubelet[2182]: I0209 18:59:06.741125 2182 scope.go:115] "RemoveContainer" containerID="dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252" Feb 9 18:59:06.742578 env[1711]: time="2024-02-09T18:59:06.742549983Z" level=info msg="RemoveContainer for \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\"" Feb 9 18:59:06.746520 env[1711]: time="2024-02-09T18:59:06.746485860Z" level=info msg="RemoveContainer for \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\" returns successfully" Feb 9 18:59:06.746675 kubelet[2182]: I0209 18:59:06.746651 2182 scope.go:115] "RemoveContainer" containerID="6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74" Feb 9 18:59:06.748321 env[1711]: time="2024-02-09T18:59:06.748291628Z" level=info msg="RemoveContainer for \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\"" Feb 9 18:59:06.751823 env[1711]: time="2024-02-09T18:59:06.751792026Z" level=info msg="RemoveContainer for \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\" returns successfully" Feb 9 18:59:06.752052 kubelet[2182]: I0209 18:59:06.752030 2182 scope.go:115] "RemoveContainer" containerID="2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0" Feb 9 18:59:06.753170 env[1711]: time="2024-02-09T18:59:06.753143736Z" level=info msg="RemoveContainer for \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\"" Feb 9 18:59:06.756816 env[1711]: time="2024-02-09T18:59:06.756783264Z" level=info msg="RemoveContainer for \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\" returns successfully" Feb 9 18:59:06.756967 kubelet[2182]: I0209 18:59:06.756944 2182 scope.go:115] "RemoveContainer" containerID="373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160" Feb 9 18:59:06.757241 env[1711]: time="2024-02-09T18:59:06.757175564Z" level=error msg="ContainerStatus for \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\": not found" Feb 9 18:59:06.757372 kubelet[2182]: E0209 18:59:06.757353 2182 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\": not found" containerID="373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160" Feb 9 18:59:06.757470 kubelet[2182]: I0209 18:59:06.757394 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160} err="failed to get container status \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\": rpc error: code = NotFound desc = an error occurred when try to find container \"373007ae884bd12da8d9903b8d65e7e62cfe4ebb0936e8e493aab822f43e5160\": not found" Feb 9 18:59:06.757470 kubelet[2182]: I0209 18:59:06.757412 2182 scope.go:115] "RemoveContainer" containerID="9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b" Feb 9 18:59:06.757712 env[1711]: time="2024-02-09T18:59:06.757656814Z" level=error msg="ContainerStatus for \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\": not found" Feb 9 18:59:06.757913 kubelet[2182]: E0209 18:59:06.757884 2182 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\": not found" containerID="9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b" Feb 9 18:59:06.757979 kubelet[2182]: I0209 18:59:06.757930 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b} err="failed to get container status \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d08acc485b1c1d02b171b0f205895ff876c9224ce59b5e3ac6d4c31247a827b\": not found" Feb 9 18:59:06.757979 kubelet[2182]: I0209 18:59:06.757945 2182 scope.go:115] "RemoveContainer" containerID="dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252" Feb 9 18:59:06.758165 env[1711]: time="2024-02-09T18:59:06.758110669Z" level=error msg="ContainerStatus for \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\": not found" Feb 9 18:59:06.758263 kubelet[2182]: E0209 18:59:06.758241 2182 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\": not found" containerID="dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252" Feb 9 18:59:06.758342 kubelet[2182]: I0209 18:59:06.758279 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252} err="failed to get container status 
\"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbcefb553252d1bc608b4d48ca24c3f41d939ea356b663f90fdad29fc50dd252\": not found" Feb 9 18:59:06.758342 kubelet[2182]: I0209 18:59:06.758293 2182 scope.go:115] "RemoveContainer" containerID="6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74" Feb 9 18:59:06.758616 env[1711]: time="2024-02-09T18:59:06.758554621Z" level=error msg="ContainerStatus for \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\": not found" Feb 9 18:59:06.758716 kubelet[2182]: E0209 18:59:06.758701 2182 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\": not found" containerID="6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74" Feb 9 18:59:06.758772 kubelet[2182]: I0209 18:59:06.758735 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74} err="failed to get container status \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b79ddf3c31f27020953540cfef0adbed5d4d6580720d02aa8a31aae57938b74\": not found" Feb 9 18:59:06.758772 kubelet[2182]: I0209 18:59:06.758748 2182 scope.go:115] "RemoveContainer" containerID="2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0" Feb 9 18:59:06.759012 env[1711]: time="2024-02-09T18:59:06.758953608Z" level=error msg="ContainerStatus for \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\": not found" Feb 9 18:59:06.759115 kubelet[2182]: E0209 18:59:06.759101 2182 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\": not found" containerID="2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0" Feb 9 18:59:06.759192 kubelet[2182]: I0209 18:59:06.759131 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0} err="failed to get container status \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\": rpc error: code = NotFound desc = an error occurred when try to find container \"2633bf2f164cdcb43c0c8adc1c0b923d1906960f6b4d3aa5e7fbb8e5a9b982e0\": not found" Feb 9 18:59:06.770727 systemd[1]: var-lib-kubelet-pods-102dedc8\x2d1b17\x2d4155\x2d95ee\x2dfef965d1c1eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtg96p.mount: Deactivated successfully. Feb 9 18:59:06.770925 systemd[1]: var-lib-kubelet-pods-102dedc8\x2d1b17\x2d4155\x2d95ee\x2dfef965d1c1eb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 18:59:06.771060 systemd[1]: var-lib-kubelet-pods-102dedc8\x2d1b17\x2d4155\x2d95ee\x2dfef965d1c1eb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:59:07.297972 kubelet[2182]: E0209 18:59:07.297930 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:07.516994 kubelet[2182]: I0209 18:59:07.516964 2182 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=102dedc8-1b17-4155-95ee-fef965d1c1eb path="/var/lib/kubelet/pods/102dedc8-1b17-4155-95ee-fef965d1c1eb/volumes" Feb 9 18:59:08.298286 kubelet[2182]: E0209 18:59:08.298233 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:09.239821 kubelet[2182]: E0209 18:59:09.239762 2182 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:09.299189 kubelet[2182]: E0209 18:59:09.299150 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:09.392039 kubelet[2182]: E0209 18:59:09.392006 2182 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:59:09.564682 kubelet[2182]: I0209 18:59:09.564043 2182 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:59:09.564682 kubelet[2182]: E0209 18:59:09.564148 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="102dedc8-1b17-4155-95ee-fef965d1c1eb" containerName="mount-cgroup" Feb 9 18:59:09.564682 kubelet[2182]: E0209 18:59:09.564161 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="102dedc8-1b17-4155-95ee-fef965d1c1eb" containerName="mount-bpf-fs" Feb 9 18:59:09.564682 kubelet[2182]: E0209 18:59:09.564212 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="102dedc8-1b17-4155-95ee-fef965d1c1eb" containerName="clean-cilium-state" Feb 9 18:59:09.564682 kubelet[2182]: E0209 18:59:09.564227 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="102dedc8-1b17-4155-95ee-fef965d1c1eb" containerName="apply-sysctl-overwrites" Feb 9 18:59:09.564682 kubelet[2182]: E0209 18:59:09.564236 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="102dedc8-1b17-4155-95ee-fef965d1c1eb" containerName="cilium-agent" Feb 9 18:59:09.564682 kubelet[2182]: I0209 18:59:09.564295 2182 memory_manager.go:346] "RemoveStaleState removing state" podUID="102dedc8-1b17-4155-95ee-fef965d1c1eb" containerName="cilium-agent" Feb 9 18:59:09.651520 kubelet[2182]: I0209 18:59:09.651488 2182 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:59:09.693546 kubelet[2182]: I0209 18:59:09.693511 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ba0c4b5-2ac1-45e1-a2be-472067a34ef2-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-9wxsz\" (UID: \"5ba0c4b5-2ac1-45e1-a2be-472067a34ef2\") " pod="kube-system/cilium-operator-f59cbd8c6-9wxsz" Feb 9 18:59:09.693728 kubelet[2182]: I0209 18:59:09.693566 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpwmt\" (UniqueName: \"kubernetes.io/projected/5ba0c4b5-2ac1-45e1-a2be-472067a34ef2-kube-api-access-xpwmt\") pod 
\"cilium-operator-f59cbd8c6-9wxsz\" (UID: \"5ba0c4b5-2ac1-45e1-a2be-472067a34ef2\") " pod="kube-system/cilium-operator-f59cbd8c6-9wxsz" Feb 9 18:59:09.794305 kubelet[2182]: I0209 18:59:09.794270 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-hostproc\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794534 kubelet[2182]: I0209 18:59:09.794322 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cni-path\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794534 kubelet[2182]: I0209 18:59:09.794350 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-etc-cni-netd\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794534 kubelet[2182]: I0209 18:59:09.794391 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-host-proc-sys-net\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794534 kubelet[2182]: I0209 18:59:09.794423 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-cgroup\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794534 kubelet[2182]: I0209 18:59:09.794479 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9200a38c-77fc-468e-917b-6c6090eea7f3-clustermesh-secrets\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794771 kubelet[2182]: I0209 18:59:09.794547 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-run\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794771 kubelet[2182]: I0209 18:59:09.794580 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-config-path\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794771 kubelet[2182]: I0209 18:59:09.794622 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-ipsec-secrets\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794771 kubelet[2182]: I0209 18:59:09.794656 
2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9200a38c-77fc-468e-917b-6c6090eea7f3-hubble-tls\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794771 kubelet[2182]: I0209 18:59:09.794690 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkkxc\" (UniqueName: \"kubernetes.io/projected/9200a38c-77fc-468e-917b-6c6090eea7f3-kube-api-access-dkkxc\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.794771 kubelet[2182]: I0209 18:59:09.794723 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-bpf-maps\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.795020 kubelet[2182]: I0209 18:59:09.794754 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-lib-modules\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.795020 kubelet[2182]: I0209 18:59:09.794786 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-xtables-lock\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.795020 kubelet[2182]: I0209 18:59:09.794818 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-host-proc-sys-kernel\") pod \"cilium-bcwz4\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " pod="kube-system/cilium-bcwz4" Feb 9 18:59:09.868328 env[1711]: time="2024-02-09T18:59:09.868287543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-9wxsz,Uid:5ba0c4b5-2ac1-45e1-a2be-472067a34ef2,Namespace:kube-system,Attempt:0,}" Feb 9 18:59:09.891705 env[1711]: time="2024-02-09T18:59:09.891633152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:59:09.891705 env[1711]: time="2024-02-09T18:59:09.891669562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:59:09.891977 env[1711]: time="2024-02-09T18:59:09.891686103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:59:09.891977 env[1711]: time="2024-02-09T18:59:09.891824307Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd038ecfe0141da189f1180e7be3fd701e53a339891c739e70be61d5c31f128 pid=4021 runtime=io.containerd.runc.v2 Feb 9 18:59:09.956148 env[1711]: time="2024-02-09T18:59:09.955774832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bcwz4,Uid:9200a38c-77fc-468e-917b-6c6090eea7f3,Namespace:kube-system,Attempt:0,}" Feb 9 18:59:09.987926 env[1711]: time="2024-02-09T18:59:09.987423077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:59:09.987926 env[1711]: time="2024-02-09T18:59:09.987498332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:59:09.987926 env[1711]: time="2024-02-09T18:59:09.987519878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:59:09.987926 env[1711]: time="2024-02-09T18:59:09.987672749Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7 pid=4061 runtime=io.containerd.runc.v2 Feb 9 18:59:09.989762 env[1711]: time="2024-02-09T18:59:09.989714334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-9wxsz,Uid:5ba0c4b5-2ac1-45e1-a2be-472067a34ef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cd038ecfe0141da189f1180e7be3fd701e53a339891c739e70be61d5c31f128\"" Feb 9 18:59:09.991961 env[1711]: time="2024-02-09T18:59:09.991918602Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:59:10.042948 env[1711]: time="2024-02-09T18:59:10.042906612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bcwz4,Uid:9200a38c-77fc-468e-917b-6c6090eea7f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\"" Feb 9 18:59:10.046113 env[1711]: time="2024-02-09T18:59:10.046067098Z" level=info msg="CreateContainer within sandbox \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:59:10.063147 env[1711]: time="2024-02-09T18:59:10.063105678Z" level=info msg="CreateContainer within sandbox \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2\"" Feb 9 18:59:10.063767 env[1711]: time="2024-02-09T18:59:10.063732771Z" level=info msg="StartContainer for \"9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2\"" Feb 9 18:59:10.127269 env[1711]: time="2024-02-09T18:59:10.127158054Z" level=info msg="StartContainer for \"9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2\" returns successfully" Feb 9 18:59:10.192121 env[1711]: time="2024-02-09T18:59:10.191930718Z" level=info msg="shim disconnected" id=9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2 Feb 9 18:59:10.192121 env[1711]: time="2024-02-09T18:59:10.192116373Z" level=warning msg="cleaning up after shim 
disconnected" id=9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2 namespace=k8s.io Feb 9 18:59:10.192435 env[1711]: time="2024-02-09T18:59:10.192132764Z" level=info msg="cleaning up dead shim" Feb 9 18:59:10.207419 env[1711]: time="2024-02-09T18:59:10.207364865Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4148 runtime=io.containerd.runc.v2\n" Feb 9 18:59:10.299646 kubelet[2182]: E0209 18:59:10.299587 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:10.726540 env[1711]: time="2024-02-09T18:59:10.726502258Z" level=info msg="StopPodSandbox for \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\"" Feb 9 18:59:10.726803 env[1711]: time="2024-02-09T18:59:10.726776729Z" level=info msg="Container to stop \"9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:59:10.774363 env[1711]: time="2024-02-09T18:59:10.774300917Z" level=info msg="shim disconnected" id=b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7 Feb 9 18:59:10.774663 env[1711]: time="2024-02-09T18:59:10.774641041Z" level=warning msg="cleaning up after shim disconnected" id=b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7 namespace=k8s.io Feb 9 18:59:10.774841 env[1711]: time="2024-02-09T18:59:10.774823743Z" level=info msg="cleaning up dead shim" Feb 9 18:59:10.786506 env[1711]: time="2024-02-09T18:59:10.786433995Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n" Feb 9 18:59:10.786972 env[1711]: time="2024-02-09T18:59:10.786942660Z" level=info msg="TearDown network for sandbox \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\" successfully" Feb 9 18:59:10.787082 env[1711]: time="2024-02-09T18:59:10.787057315Z" level=info msg="StopPodSandbox for \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\" returns successfully" Feb 9 18:59:10.912793 kubelet[2182]: I0209 18:59:10.912743 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-hostproc\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.912793 kubelet[2182]: I0209 18:59:10.912801 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-host-proc-sys-net\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913169 kubelet[2182]: I0209 18:59:10.912836 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9200a38c-77fc-468e-917b-6c6090eea7f3-clustermesh-secrets\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913169 kubelet[2182]: I0209 18:59:10.912862 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-ipsec-secrets\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: 
\"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913169 kubelet[2182]: I0209 18:59:10.912894 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkkxc\" (UniqueName: \"kubernetes.io/projected/9200a38c-77fc-468e-917b-6c6090eea7f3-kube-api-access-dkkxc\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913169 kubelet[2182]: I0209 18:59:10.912919 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-host-proc-sys-kernel\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913169 kubelet[2182]: I0209 18:59:10.912946 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-lib-modules\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913169 kubelet[2182]: I0209 18:59:10.912969 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-xtables-lock\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913426 kubelet[2182]: I0209 18:59:10.913090 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-bpf-maps\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913426 kubelet[2182]: I0209 18:59:10.913125 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-cgroup\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913426 kubelet[2182]: I0209 18:59:10.913153 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-etc-cni-netd\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913426 kubelet[2182]: I0209 18:59:10.913182 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9200a38c-77fc-468e-917b-6c6090eea7f3-hubble-tls\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913426 kubelet[2182]: I0209 18:59:10.913212 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-run\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913426 kubelet[2182]: I0209 18:59:10.913238 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cni-path\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913711 kubelet[2182]: 
I0209 18:59:10.913269 2182 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-config-path\") pod \"9200a38c-77fc-468e-917b-6c6090eea7f3\" (UID: \"9200a38c-77fc-468e-917b-6c6090eea7f3\") " Feb 9 18:59:10.913711 kubelet[2182]: W0209 18:59:10.913529 2182 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9200a38c-77fc-468e-917b-6c6090eea7f3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:59:10.920489 kubelet[2182]: I0209 18:59:10.913846 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.920489 kubelet[2182]: I0209 18:59:10.913906 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.920489 kubelet[2182]: I0209 18:59:10.913954 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.920489 kubelet[2182]: I0209 18:59:10.920218 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:59:10.920489 kubelet[2182]: I0209 18:59:10.920285 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.920837 kubelet[2182]: I0209 18:59:10.920312 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.920837 kubelet[2182]: I0209 18:59:10.920338 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.920837 kubelet[2182]: I0209 18:59:10.920362 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.920837 kubelet[2182]: I0209 18:59:10.920758 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.920837 kubelet[2182]: I0209 18:59:10.920792 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.921076 kubelet[2182]: I0209 18:59:10.920994 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:59:10.930750 systemd[1]: var-lib-kubelet-pods-9200a38c\x2d77fc\x2d468e\x2d917b\x2d6c6090eea7f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:59:10.935754 kubelet[2182]: I0209 18:59:10.935712 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9200a38c-77fc-468e-917b-6c6090eea7f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:59:10.943462 systemd[1]: var-lib-kubelet-pods-9200a38c\x2d77fc\x2d468e\x2d917b\x2d6c6090eea7f3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 18:59:10.947317 kubelet[2182]: I0209 18:59:10.947279 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:59:10.950906 systemd[1]: var-lib-kubelet-pods-9200a38c\x2d77fc\x2d468e\x2d917b\x2d6c6090eea7f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:59:10.955108 systemd[1]: var-lib-kubelet-pods-9200a38c\x2d77fc\x2d468e\x2d917b\x2d6c6090eea7f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddkkxc.mount: Deactivated successfully. 
Feb 9 18:59:10.956021 kubelet[2182]: I0209 18:59:10.955970 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9200a38c-77fc-468e-917b-6c6090eea7f3-kube-api-access-dkkxc" (OuterVolumeSpecName: "kube-api-access-dkkxc") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "kube-api-access-dkkxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:59:10.956579 kubelet[2182]: I0209 18:59:10.956540 2182 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9200a38c-77fc-468e-917b-6c6090eea7f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9200a38c-77fc-468e-917b-6c6090eea7f3" (UID: "9200a38c-77fc-468e-917b-6c6090eea7f3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:59:11.014031 kubelet[2182]: I0209 18:59:11.013763 2182 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cni-path\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014031 kubelet[2182]: I0209 18:59:11.013805 2182 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9200a38c-77fc-468e-917b-6c6090eea7f3-hubble-tls\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014031 kubelet[2182]: I0209 18:59:11.013818 2182 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-run\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014031 kubelet[2182]: I0209 18:59:11.013835 2182 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-config-path\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014031 kubelet[2182]: I0209 18:59:11.013851 2182 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-host-proc-sys-kernel\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014031 kubelet[2182]: I0209 18:59:11.013866 2182 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-hostproc\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014031 kubelet[2182]: I0209 18:59:11.013879 2182 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-host-proc-sys-net\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014031 kubelet[2182]: I0209 18:59:11.013892 2182 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9200a38c-77fc-468e-917b-6c6090eea7f3-clustermesh-secrets\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014837 kubelet[2182]: I0209 18:59:11.013904 2182 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-ipsec-secrets\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014837 kubelet[2182]: I0209 18:59:11.013920 2182 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-dkkxc\" (UniqueName: 
\"kubernetes.io/projected/9200a38c-77fc-468e-917b-6c6090eea7f3-kube-api-access-dkkxc\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014837 kubelet[2182]: I0209 18:59:11.013936 2182 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-lib-modules\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014837 kubelet[2182]: I0209 18:59:11.013948 2182 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-bpf-maps\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014837 kubelet[2182]: I0209 18:59:11.013960 2182 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-xtables-lock\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014837 kubelet[2182]: I0209 18:59:11.013975 2182 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-etc-cni-netd\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.014837 kubelet[2182]: I0209 18:59:11.013990 2182 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9200a38c-77fc-468e-917b-6c6090eea7f3-cilium-cgroup\") on node \"172.31.21.130\" DevicePath \"\"" Feb 9 18:59:11.299890 kubelet[2182]: E0209 18:59:11.299789 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:11.750849 kubelet[2182]: I0209 18:59:11.750813 2182 scope.go:115] "RemoveContainer" containerID="9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2" Feb 9 18:59:11.753666 env[1711]: time="2024-02-09T18:59:11.753624754Z" level=info msg="RemoveContainer for \"9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2\"" Feb 9 18:59:11.758140 env[1711]: time="2024-02-09T18:59:11.758101177Z" level=info msg="RemoveContainer for \"9dc1a464e46b5384bab7a63d99675d18f004f89db4f5f6e8c95bede806961eb2\" returns successfully" Feb 9 18:59:11.793578 kubelet[2182]: I0209 18:59:11.793540 2182 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:59:11.793864 kubelet[2182]: E0209 18:59:11.793704 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9200a38c-77fc-468e-917b-6c6090eea7f3" containerName="mount-cgroup" Feb 9 18:59:11.793864 kubelet[2182]: I0209 18:59:11.793741 2182 memory_manager.go:346] "RemoveStaleState removing state" podUID="9200a38c-77fc-468e-917b-6c6090eea7f3" containerName="mount-cgroup" Feb 9 18:59:11.919411 kubelet[2182]: I0209 18:59:11.919376 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-bpf-maps\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.919411 kubelet[2182]: I0209 18:59:11.919429 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-hostproc\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.919765 kubelet[2182]: I0209 18:59:11.919470 2182 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-cilium-cgroup\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.919765 kubelet[2182]: I0209 18:59:11.919496 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-lib-modules\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.919765 kubelet[2182]: I0209 18:59:11.919525 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1037e835-8a29-4076-92fc-18947fead706-cilium-config-path\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.919765 kubelet[2182]: I0209 18:59:11.919552 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-cni-path\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.919765 kubelet[2182]: I0209 18:59:11.919580 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1037e835-8a29-4076-92fc-18947fead706-cilium-ipsec-secrets\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.919765 kubelet[2182]: I0209 18:59:11.919607 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-cilium-run\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.920135 kubelet[2182]: I0209 18:59:11.919730 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-xtables-lock\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.920135 kubelet[2182]: I0209 18:59:11.919826 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1037e835-8a29-4076-92fc-18947fead706-clustermesh-secrets\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.920135 kubelet[2182]: I0209 18:59:11.919859 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-host-proc-sys-net\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.920135 kubelet[2182]: I0209 18:59:11.919891 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1037e835-8a29-4076-92fc-18947fead706-hubble-tls\") pod 
\"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.920135 kubelet[2182]: I0209 18:59:11.919925 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chc96\" (UniqueName: \"kubernetes.io/projected/1037e835-8a29-4076-92fc-18947fead706-kube-api-access-chc96\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.920481 kubelet[2182]: I0209 18:59:11.919957 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-host-proc-sys-kernel\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:11.920481 kubelet[2182]: I0209 18:59:11.920075 2182 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1037e835-8a29-4076-92fc-18947fead706-etc-cni-netd\") pod \"cilium-ptcwl\" (UID: \"1037e835-8a29-4076-92fc-18947fead706\") " pod="kube-system/cilium-ptcwl" Feb 9 18:59:12.300540 kubelet[2182]: E0209 18:59:12.300480 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:12.399643 env[1711]: time="2024-02-09T18:59:12.399601379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ptcwl,Uid:1037e835-8a29-4076-92fc-18947fead706,Namespace:kube-system,Attempt:0,}" Feb 9 18:59:12.432432 env[1711]: time="2024-02-09T18:59:12.432354452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:59:12.432773 env[1711]: time="2024-02-09T18:59:12.432732458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:59:12.432901 env[1711]: time="2024-02-09T18:59:12.432879032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:59:12.433339 env[1711]: time="2024-02-09T18:59:12.433288504Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80 pid=4216 runtime=io.containerd.runc.v2 Feb 9 18:59:12.516773 env[1711]: time="2024-02-09T18:59:12.516724717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ptcwl,Uid:1037e835-8a29-4076-92fc-18947fead706,Namespace:kube-system,Attempt:0,} returns sandbox id \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\"" Feb 9 18:59:12.519893 env[1711]: time="2024-02-09T18:59:12.519848309Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:59:12.556767 env[1711]: time="2024-02-09T18:59:12.556641154Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4d480093ca6213aadb81c515acd0f45f34f3e00ed4facbf5a5026fdd4a704faf\"" Feb 9 18:59:12.559263 env[1711]: time="2024-02-09T18:59:12.559115836Z" level=info msg="StartContainer for \"4d480093ca6213aadb81c515acd0f45f34f3e00ed4facbf5a5026fdd4a704faf\"" Feb 9 18:59:12.651106 env[1711]: time="2024-02-09T18:59:12.651047948Z" level=info msg="StartContainer for \"4d480093ca6213aadb81c515acd0f45f34f3e00ed4facbf5a5026fdd4a704faf\" returns successfully" Feb 9 18:59:12.760452 env[1711]: time="2024-02-09T18:59:12.760387618Z" level=info msg="shim disconnected" id=4d480093ca6213aadb81c515acd0f45f34f3e00ed4facbf5a5026fdd4a704faf Feb 9 18:59:12.761005 env[1711]: time="2024-02-09T18:59:12.760522787Z" level=warning msg="cleaning up after shim disconnected" id=4d480093ca6213aadb81c515acd0f45f34f3e00ed4facbf5a5026fdd4a704faf namespace=k8s.io Feb 9 18:59:12.761005 env[1711]: time="2024-02-09T18:59:12.760539110Z" level=info msg="cleaning up dead shim" Feb 9 18:59:12.771427 env[1711]: time="2024-02-09T18:59:12.771377089Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4300 runtime=io.containerd.runc.v2\n" Feb 9 18:59:13.178687 env[1711]: time="2024-02-09T18:59:13.178632154Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:13.181124 env[1711]: time="2024-02-09T18:59:13.181088014Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:13.183402 env[1711]: time="2024-02-09T18:59:13.183374664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:59:13.183961 env[1711]: time="2024-02-09T18:59:13.183933347Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" 
Feb 9 18:59:13.188139 env[1711]: time="2024-02-09T18:59:13.188101065Z" level=info msg="CreateContainer within sandbox \"1cd038ecfe0141da189f1180e7be3fd701e53a339891c739e70be61d5c31f128\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 18:59:13.208458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730039771.mount: Deactivated successfully. Feb 9 18:59:13.220225 env[1711]: time="2024-02-09T18:59:13.220174357Z" level=info msg="CreateContainer within sandbox \"1cd038ecfe0141da189f1180e7be3fd701e53a339891c739e70be61d5c31f128\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c577cc2f63d1ac62ae8846c0a3f85abc22ca024e1dc56d65ac3dea2dac728bc\"" Feb 9 18:59:13.221008 env[1711]: time="2024-02-09T18:59:13.220967788Z" level=info msg="StartContainer for \"3c577cc2f63d1ac62ae8846c0a3f85abc22ca024e1dc56d65ac3dea2dac728bc\"" Feb 9 18:59:13.294700 env[1711]: time="2024-02-09T18:59:13.294645377Z" level=info msg="StartContainer for \"3c577cc2f63d1ac62ae8846c0a3f85abc22ca024e1dc56d65ac3dea2dac728bc\" returns successfully" Feb 9 18:59:13.302120 kubelet[2182]: E0209 18:59:13.302085 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:13.518050 kubelet[2182]: I0209 18:59:13.517586 2182 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9200a38c-77fc-468e-917b-6c6090eea7f3 path="/var/lib/kubelet/pods/9200a38c-77fc-468e-917b-6c6090eea7f3/volumes" Feb 9 18:59:13.765741 env[1711]: time="2024-02-09T18:59:13.765525093Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:59:13.792077 env[1711]: time="2024-02-09T18:59:13.791869381Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49f071052db0879ac0381695baffd125b91ffba871fe3239718e1a68051ab6a0\"" Feb 9 18:59:13.792756 env[1711]: time="2024-02-09T18:59:13.792691468Z" level=info msg="StartContainer for \"49f071052db0879ac0381695baffd125b91ffba871fe3239718e1a68051ab6a0\"" Feb 9 18:59:13.796058 kubelet[2182]: I0209 18:59:13.796032 2182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-9wxsz" podStartSLOduration=-9.223372032058807e+09 pod.CreationTimestamp="2024-02-09 18:59:09 +0000 UTC" firstStartedPulling="2024-02-09 18:59:09.991403983 +0000 UTC m=+81.425153193" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:59:13.794083 +0000 UTC m=+85.227832229" watchObservedRunningTime="2024-02-09 18:59:13.795968091 +0000 UTC m=+85.229717320" Feb 9 18:59:13.831627 systemd[1]: run-containerd-runc-k8s.io-49f071052db0879ac0381695baffd125b91ffba871fe3239718e1a68051ab6a0-runc.94kV1w.mount: Deactivated successfully. 
Feb 9 18:59:13.894566 env[1711]: time="2024-02-09T18:59:13.894353712Z" level=info msg="StartContainer for \"49f071052db0879ac0381695baffd125b91ffba871fe3239718e1a68051ab6a0\" returns successfully" Feb 9 18:59:13.909520 kubelet[2182]: I0209 18:59:13.907042 2182 setters.go:548] "Node became not ready" node="172.31.21.130" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:59:13.906968862 +0000 UTC m=+85.340718097 LastTransitionTime:2024-02-09 18:59:13.906968862 +0000 UTC m=+85.340718097 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 18:59:13.937049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49f071052db0879ac0381695baffd125b91ffba871fe3239718e1a68051ab6a0-rootfs.mount: Deactivated successfully. Feb 9 18:59:13.954857 env[1711]: time="2024-02-09T18:59:13.954800450Z" level=info msg="shim disconnected" id=49f071052db0879ac0381695baffd125b91ffba871fe3239718e1a68051ab6a0 Feb 9 18:59:13.954857 env[1711]: time="2024-02-09T18:59:13.954856272Z" level=warning msg="cleaning up after shim disconnected" id=49f071052db0879ac0381695baffd125b91ffba871fe3239718e1a68051ab6a0 namespace=k8s.io Feb 9 18:59:13.955164 env[1711]: time="2024-02-09T18:59:13.954868270Z" level=info msg="cleaning up dead shim" Feb 9 18:59:13.966165 env[1711]: time="2024-02-09T18:59:13.966118381Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4403 runtime=io.containerd.runc.v2\n" Feb 9 18:59:14.302639 kubelet[2182]: E0209 18:59:14.302581 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:14.393523 kubelet[2182]: E0209 18:59:14.393484 2182 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:59:14.772931 env[1711]: time="2024-02-09T18:59:14.772889654Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:59:14.800414 env[1711]: time="2024-02-09T18:59:14.800364360Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8316e882d8279f0fe495d260c71cffed5a346ab38ad191c57190b14d72358c9\"" Feb 9 18:59:14.801044 env[1711]: time="2024-02-09T18:59:14.800949993Z" level=info msg="StartContainer for \"d8316e882d8279f0fe495d260c71cffed5a346ab38ad191c57190b14d72358c9\"" Feb 9 18:59:14.841955 systemd[1]: run-containerd-runc-k8s.io-d8316e882d8279f0fe495d260c71cffed5a346ab38ad191c57190b14d72358c9-runc.4rcQ1p.mount: Deactivated successfully. Feb 9 18:59:14.884220 env[1711]: time="2024-02-09T18:59:14.884172717Z" level=info msg="StartContainer for \"d8316e882d8279f0fe495d260c71cffed5a346ab38ad191c57190b14d72358c9\" returns successfully" Feb 9 18:59:14.918301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8316e882d8279f0fe495d260c71cffed5a346ab38ad191c57190b14d72358c9-rootfs.mount: Deactivated successfully. 
Feb 9 18:59:14.929558 env[1711]: time="2024-02-09T18:59:14.929509468Z" level=info msg="shim disconnected" id=d8316e882d8279f0fe495d260c71cffed5a346ab38ad191c57190b14d72358c9 Feb 9 18:59:14.929558 env[1711]: time="2024-02-09T18:59:14.929558015Z" level=warning msg="cleaning up after shim disconnected" id=d8316e882d8279f0fe495d260c71cffed5a346ab38ad191c57190b14d72358c9 namespace=k8s.io Feb 9 18:59:14.929558 env[1711]: time="2024-02-09T18:59:14.929570067Z" level=info msg="cleaning up dead shim" Feb 9 18:59:14.939492 env[1711]: time="2024-02-09T18:59:14.939418486Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4459 runtime=io.containerd.runc.v2\n" Feb 9 18:59:15.303669 kubelet[2182]: E0209 18:59:15.303615 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:15.778469 env[1711]: time="2024-02-09T18:59:15.777893552Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:59:15.799406 env[1711]: time="2024-02-09T18:59:15.799351978Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ccbfceaec61bd25f935ca88c911d263f54ceb63e775f01c4c9db96c040c1c313\"" Feb 9 18:59:15.800109 env[1711]: time="2024-02-09T18:59:15.800075655Z" level=info msg="StartContainer for \"ccbfceaec61bd25f935ca88c911d263f54ceb63e775f01c4c9db96c040c1c313\"" Feb 9 18:59:15.862653 env[1711]: time="2024-02-09T18:59:15.862606365Z" level=info msg="StartContainer for \"ccbfceaec61bd25f935ca88c911d263f54ceb63e775f01c4c9db96c040c1c313\" returns successfully" Feb 9 18:59:15.884835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccbfceaec61bd25f935ca88c911d263f54ceb63e775f01c4c9db96c040c1c313-rootfs.mount: Deactivated successfully. 
Feb 9 18:59:15.897632 env[1711]: time="2024-02-09T18:59:15.897483341Z" level=info msg="shim disconnected" id=ccbfceaec61bd25f935ca88c911d263f54ceb63e775f01c4c9db96c040c1c313 Feb 9 18:59:15.897632 env[1711]: time="2024-02-09T18:59:15.897627953Z" level=warning msg="cleaning up after shim disconnected" id=ccbfceaec61bd25f935ca88c911d263f54ceb63e775f01c4c9db96c040c1c313 namespace=k8s.io Feb 9 18:59:15.897932 env[1711]: time="2024-02-09T18:59:15.897650307Z" level=info msg="cleaning up dead shim" Feb 9 18:59:15.907151 env[1711]: time="2024-02-09T18:59:15.907101947Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:59:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4514 runtime=io.containerd.runc.v2\n" Feb 9 18:59:16.304308 kubelet[2182]: E0209 18:59:16.304267 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:16.783539 env[1711]: time="2024-02-09T18:59:16.783494260Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:59:16.806980 env[1711]: time="2024-02-09T18:59:16.806933719Z" level=info msg="CreateContainer within sandbox \"61a5215c6d13e0bf84272d14b97ced59494f8e2fe4680cdede59ff6a9a5ecc80\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"db9062c99845d109b4c453c5ef8db7ef8a08ea4d7293de9d06d8a64947907992\"" Feb 9 18:59:16.807923 env[1711]: time="2024-02-09T18:59:16.807891396Z" level=info msg="StartContainer for \"db9062c99845d109b4c453c5ef8db7ef8a08ea4d7293de9d06d8a64947907992\"" Feb 9 18:59:16.908417 env[1711]: time="2024-02-09T18:59:16.908353089Z" level=info msg="StartContainer for \"db9062c99845d109b4c453c5ef8db7ef8a08ea4d7293de9d06d8a64947907992\" returns successfully" Feb 9 18:59:16.950410 systemd[1]: run-containerd-runc-k8s.io-db9062c99845d109b4c453c5ef8db7ef8a08ea4d7293de9d06d8a64947907992-runc.wlMN0N.mount: Deactivated successfully. Feb 9 18:59:17.304938 kubelet[2182]: E0209 18:59:17.304840 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:17.661473 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 18:59:17.811331 kubelet[2182]: I0209 18:59:17.809294 2182 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ptcwl" podStartSLOduration=6.809239796 pod.CreationTimestamp="2024-02-09 18:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:59:17.808665947 +0000 UTC m=+89.242415177" watchObservedRunningTime="2024-02-09 18:59:17.809239796 +0000 UTC m=+89.242989025" Feb 9 18:59:18.305537 kubelet[2182]: E0209 18:59:18.305484 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:19.306496 kubelet[2182]: E0209 18:59:19.306377 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:20.307427 kubelet[2182]: E0209 18:59:20.307391 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:20.884172 (udev-worker)[5082]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 18:59:20.887663 (udev-worker)[5083]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:59:20.898284 systemd-networkd[1515]: lxc_health: Link UP Feb 9 18:59:20.919061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:59:20.915428 systemd-networkd[1515]: lxc_health: Gained carrier Feb 9 18:59:21.307863 kubelet[2182]: E0209 18:59:21.307723 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:22.308860 kubelet[2182]: E0209 18:59:22.308819 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:22.542590 systemd-networkd[1515]: lxc_health: Gained IPv6LL Feb 9 18:59:23.309860 kubelet[2182]: E0209 18:59:23.309795 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:23.802389 systemd[1]: run-containerd-runc-k8s.io-db9062c99845d109b4c453c5ef8db7ef8a08ea4d7293de9d06d8a64947907992-runc.Tqx93u.mount: Deactivated successfully. Feb 9 18:59:24.310882 kubelet[2182]: E0209 18:59:24.310801 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:25.311907 kubelet[2182]: E0209 18:59:25.311874 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:26.313725 kubelet[2182]: E0209 18:59:26.313591 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:27.318324 kubelet[2182]: E0209 18:59:27.318279 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:28.319374 kubelet[2182]: E0209 18:59:28.319323 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:29.240313 kubelet[2182]: E0209 18:59:29.240262 2182 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:29.319883 kubelet[2182]: E0209 18:59:29.319832 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:30.320352 kubelet[2182]: E0209 18:59:30.320301 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:31.321469 kubelet[2182]: E0209 18:59:31.321406 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:32.321579 kubelet[2182]: E0209 18:59:32.321520 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:33.322181 kubelet[2182]: E0209 18:59:33.322137 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:34.322648 kubelet[2182]: E0209 18:59:34.322598 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:35.322752 kubelet[2182]: E0209 18:59:35.322694 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:36.323716 kubelet[2182]: E0209 18:59:36.323662 2182 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:37.324544 kubelet[2182]: E0209 18:59:37.324497 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:38.325462 kubelet[2182]: E0209 18:59:38.325406 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:39.326355 kubelet[2182]: E0209 18:59:39.326314 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:40.326894 kubelet[2182]: E0209 18:59:40.326841 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:41.327316 kubelet[2182]: E0209 18:59:41.327263 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:42.327495 kubelet[2182]: E0209 18:59:42.327451 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:43.328379 kubelet[2182]: E0209 18:59:43.328327 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:44.154461 kubelet[2182]: E0209 18:59:44.154415 2182 controller.go:189] failed to update lease, error: the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.21.130) Feb 9 18:59:44.329131 kubelet[2182]: E0209 18:59:44.329084 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:45.330125 kubelet[2182]: E0209 18:59:45.330081 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:46.331261 kubelet[2182]: E0209 18:59:46.331207 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:47.332417 kubelet[2182]: E0209 18:59:47.332361 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:48.333533 kubelet[2182]: E0209 18:59:48.333495 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:49.240066 kubelet[2182]: E0209 18:59:49.240017 2182 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:49.262418 env[1711]: time="2024-02-09T18:59:49.262367150Z" level=info msg="StopPodSandbox for \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\"" Feb 9 18:59:49.262868 env[1711]: time="2024-02-09T18:59:49.262518605Z" level=info msg="TearDown network for sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" successfully" Feb 9 18:59:49.262868 env[1711]: time="2024-02-09T18:59:49.262592159Z" level=info msg="StopPodSandbox for \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" returns successfully" Feb 9 18:59:49.263710 env[1711]: time="2024-02-09T18:59:49.263677216Z" level=info msg="RemovePodSandbox for \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\"" Feb 9 18:59:49.263831 env[1711]: time="2024-02-09T18:59:49.263711080Z" level=info msg="Forcibly stopping sandbox 
\"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\"" Feb 9 18:59:49.263881 env[1711]: time="2024-02-09T18:59:49.263826314Z" level=info msg="TearDown network for sandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" successfully" Feb 9 18:59:49.269152 env[1711]: time="2024-02-09T18:59:49.269118730Z" level=info msg="RemovePodSandbox \"94884a4e529721888cfdec113ad6f3a7ae9fc89cc724ef359ca30148a0c190fa\" returns successfully" Feb 9 18:59:49.269618 env[1711]: time="2024-02-09T18:59:49.269590246Z" level=info msg="StopPodSandbox for \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\"" Feb 9 18:59:49.269722 env[1711]: time="2024-02-09T18:59:49.269670468Z" level=info msg="TearDown network for sandbox \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\" successfully" Feb 9 18:59:49.269722 env[1711]: time="2024-02-09T18:59:49.269712872Z" level=info msg="StopPodSandbox for \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\" returns successfully" Feb 9 18:59:49.270112 env[1711]: time="2024-02-09T18:59:49.270082879Z" level=info msg="RemovePodSandbox for \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\"" Feb 9 18:59:49.270194 env[1711]: time="2024-02-09T18:59:49.270116375Z" level=info msg="Forcibly stopping sandbox \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\"" Feb 9 18:59:49.270250 env[1711]: time="2024-02-09T18:59:49.270201420Z" level=info msg="TearDown network for sandbox \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\" successfully" Feb 9 18:59:49.274060 env[1711]: time="2024-02-09T18:59:49.274016129Z" level=info msg="RemovePodSandbox \"b743772be1e3a1d0edf90740d2f2f63e4ea6ae4150ae71cca3c11c72b8a58ab7\" returns successfully" Feb 9 18:59:49.334534 kubelet[2182]: E0209 18:59:49.334490 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:50.335394 kubelet[2182]: E0209 18:59:50.335181 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:51.336030 kubelet[2182]: E0209 18:59:51.335975 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:52.336318 kubelet[2182]: E0209 18:59:52.336267 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:53.336871 kubelet[2182]: E0209 18:59:53.336785 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:54.154870 kubelet[2182]: E0209 18:59:54.154816 2182 controller.go:189] failed to update lease, error: Put "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 18:59:54.337817 kubelet[2182]: E0209 18:59:54.337766 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:55.338260 kubelet[2182]: E0209 18:59:55.338208 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:56.338561 kubelet[2182]: E0209 18:59:56.338518 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
18:59:57.339563 kubelet[2182]: E0209 18:59:57.339506 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:58.340258 kubelet[2182]: E0209 18:59:58.340209 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:59:59.341200 kubelet[2182]: E0209 18:59:59.341149 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:00.341981 kubelet[2182]: E0209 19:00:00.341924 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:01.343069 kubelet[2182]: E0209 19:00:01.343016 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:02.343324 kubelet[2182]: E0209 19:00:02.343265 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:03.343469 kubelet[2182]: E0209 19:00:03.343403 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:04.155476 kubelet[2182]: E0209 19:00:04.155410 2182 controller.go:189] failed to update lease, error: Put "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:00:04.343867 kubelet[2182]: E0209 19:00:04.343811 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:05.344190 kubelet[2182]: E0209 19:00:05.344132 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:06.345004 kubelet[2182]: E0209 19:00:06.344953 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:07.345526 kubelet[2182]: E0209 19:00:07.345479 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:08.345648 kubelet[2182]: E0209 19:00:08.345592 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:09.053815 kubelet[2182]: E0209 19:00:09.053693 2182 controller.go:189] failed to update lease, error: Put "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": unexpected EOF Feb 9 19:00:09.065950 kubelet[2182]: E0209 19:00:09.065912 2182 controller.go:189] failed to update lease, error: Put "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": read tcp 172.31.21.130:44544->172.31.21.6:6443: read: connection reset by peer Feb 9 19:00:09.066403 kubelet[2182]: I0209 19:00:09.065967 2182 controller.go:116] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease Feb 9 19:00:09.067767 kubelet[2182]: E0209 19:00:09.067678 2182 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": dial tcp 172.31.21.6:6443: connect: connection refused Feb 9 
19:00:09.239822 kubelet[2182]: E0209 19:00:09.239761 2182 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:09.269177 kubelet[2182]: E0209 19:00:09.269133 2182 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": dial tcp 172.31.21.6:6443: connect: connection refused Feb 9 19:00:09.346525 kubelet[2182]: E0209 19:00:09.346478 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:09.672479 kubelet[2182]: E0209 19:00:09.672102 2182 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": dial tcp 172.31.21.6:6443: connect: connection refused Feb 9 19:00:10.346666 kubelet[2182]: E0209 19:00:10.346600 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:10.474498 kubelet[2182]: E0209 19:00:10.474394 2182 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": dial tcp 172.31.21.6:6443: connect: connection refused Feb 9 19:00:11.346964 kubelet[2182]: E0209 19:00:11.346909 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:12.075970 kubelet[2182]: E0209 19:00:12.075924 2182 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": dial tcp 172.31.21.6:6443: connect: connection refused Feb 9 19:00:12.347198 kubelet[2182]: E0209 19:00:12.347087 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:13.347915 kubelet[2182]: E0209 19:00:13.347791 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:14.348734 kubelet[2182]: E0209 19:00:14.348668 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:15.349200 kubelet[2182]: E0209 19:00:15.349138 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:16.350338 kubelet[2182]: E0209 19:00:16.350298 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:17.351331 kubelet[2182]: E0209 19:00:17.351277 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:18.351860 kubelet[2182]: E0209 19:00:18.351811 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:19.352112 kubelet[2182]: E0209 19:00:19.352060 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:20.352404 kubelet[2182]: E0209 19:00:20.352346 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:21.353306 kubelet[2182]: E0209 19:00:21.353264 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:22.353623 kubelet[2182]: E0209 19:00:22.353568 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:23.354774 kubelet[2182]: E0209 19:00:23.354722 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:24.355875 kubelet[2182]: E0209 19:00:24.355834 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:25.276621 kubelet[2182]: E0209 19:00:25.276569 2182 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: Get "https://172.31.21.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.130?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 9 19:00:25.356877 kubelet[2182]: E0209 19:00:25.356816 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:25.583509 kubelet[2182]: E0209 19:00:25.583470 2182 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.21.130\": Get \"https://172.31.21.6:6443/api/v1/nodes/172.31.21.130?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 9 19:00:26.357645 kubelet[2182]: E0209 19:00:26.357593 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:27.358334 kubelet[2182]: E0209 19:00:27.358285 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:28.359470 kubelet[2182]: E0209 19:00:28.359422 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:29.240532 kubelet[2182]: E0209 19:00:29.240478 2182 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:29.359914 kubelet[2182]: E0209 19:00:29.359862 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:30.361091 kubelet[2182]: E0209 19:00:30.361031 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:31.362341 kubelet[2182]: E0209 19:00:31.362287 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:32.362704 kubelet[2182]: E0209 19:00:32.362653 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:33.363867 kubelet[2182]: E0209 19:00:33.363827 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:00:34.364471 kubelet[2182]: E0209 19:00:34.364395 2182 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"