Feb 9 18:53:29.119623 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 18:53:29.119657 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:53:29.119673 kernel: BIOS-provided physical RAM map: Feb 9 18:53:29.119685 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 9 18:53:29.119696 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 9 18:53:29.119707 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 9 18:53:29.119723 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Feb 9 18:53:29.119735 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Feb 9 18:53:29.119746 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Feb 9 18:53:29.119758 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 9 18:53:29.119768 kernel: NX (Execute Disable) protection: active Feb 9 18:53:29.119781 kernel: SMBIOS 2.7 present. Feb 9 18:53:29.119792 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Feb 9 18:53:29.119804 kernel: Hypervisor detected: KVM Feb 9 18:53:29.119884 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 18:53:29.119898 kernel: kvm-clock: cpu 0, msr 1ffaa001, primary cpu clock Feb 9 18:53:29.119911 kernel: kvm-clock: using sched offset of 6775926839 cycles Feb 9 18:53:29.119966 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 18:53:29.119982 kernel: tsc: Detected 2499.996 MHz processor Feb 9 18:53:29.119994 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 18:53:29.120043 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 18:53:29.120061 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Feb 9 18:53:29.120075 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 18:53:29.120089 kernel: Using GB pages for direct mapping Feb 9 18:53:29.120136 kernel: ACPI: Early table checksum verification disabled Feb 9 18:53:29.120153 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Feb 9 18:53:29.120168 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Feb 9 18:53:29.120181 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 9 18:53:29.120229 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Feb 9 18:53:29.120249 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Feb 9 18:53:29.120263 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 9 18:53:29.120277 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 9 18:53:29.120328 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Feb 9 18:53:29.120425 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 9 18:53:29.120441 kernel: ACPI: WAET 
0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Feb 9 18:53:29.120454 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Feb 9 18:53:29.120467 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 9 18:53:29.120497 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Feb 9 18:53:29.120508 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Feb 9 18:53:29.120519 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Feb 9 18:53:29.120536 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Feb 9 18:53:29.120548 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Feb 9 18:53:29.120560 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Feb 9 18:53:29.120571 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Feb 9 18:53:29.120587 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Feb 9 18:53:29.120600 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Feb 9 18:53:29.120612 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Feb 9 18:53:29.120625 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 18:53:29.120636 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 18:53:29.120648 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Feb 9 18:53:29.120660 kernel: NUMA: Initialized distance table, cnt=1 Feb 9 18:53:29.120671 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Feb 9 18:53:29.120688 kernel: Zone ranges: Feb 9 18:53:29.120699 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 18:53:29.120712 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Feb 9 18:53:29.120725 kernel: Normal empty Feb 9 18:53:29.120738 kernel: Movable zone start for each node Feb 9 18:53:29.120751 kernel: Early memory node ranges Feb 9 18:53:29.120764 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 9 18:53:29.120777 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Feb 9 18:53:29.120791 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Feb 9 18:53:29.120903 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 18:53:29.120917 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 9 18:53:29.120931 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Feb 9 18:53:29.120944 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 9 18:53:29.120956 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 18:53:29.120968 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Feb 9 18:53:29.120982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 18:53:29.120994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 18:53:29.121007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 18:53:29.121022 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 18:53:29.121035 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 18:53:29.121048 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 18:53:29.121061 kernel: TSC deadline timer available Feb 9 18:53:29.121138 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 18:53:29.121154 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Feb 9 18:53:29.121168 kernel: Booting paravirtualized kernel on KVM Feb 9 18:53:29.121182 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 18:53:29.121195 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 18:53:29.121213 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 18:53:29.121226 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 18:53:29.121239 kernel: pcpu-alloc: [0] 0 1 Feb 9 18:53:29.121251 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Feb 9 18:53:29.121263 kernel: kvm-guest: PV spinlocks enabled Feb 9 18:53:29.121275 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 18:53:29.121286 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Feb 9 18:53:29.121298 kernel: Policy zone: DMA32 Feb 9 18:53:29.121311 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:53:29.121327 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 18:53:29.121338 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 18:53:29.121350 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 18:53:29.121362 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 18:53:29.121375 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved) Feb 9 18:53:29.121387 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 18:53:29.121399 kernel: Kernel/User page tables isolation: enabled Feb 9 18:53:29.121411 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 18:53:29.121426 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 18:53:29.121438 kernel: rcu: Hierarchical RCU implementation. Feb 9 18:53:29.121450 kernel: rcu: RCU event tracing is enabled. Feb 9 18:53:29.121462 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 18:53:29.121475 kernel: Rude variant of Tasks RCU enabled. Feb 9 18:53:29.121514 kernel: Tracing variant of Tasks RCU enabled. Feb 9 18:53:29.121526 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 18:53:29.121538 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 18:53:29.121551 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 9 18:53:29.121568 kernel: random: crng init done Feb 9 18:53:29.121580 kernel: Console: colour VGA+ 80x25 Feb 9 18:53:29.121592 kernel: printk: console [ttyS0] enabled Feb 9 18:53:29.121604 kernel: ACPI: Core revision 20210730 Feb 9 18:53:29.121618 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Feb 9 18:53:29.121632 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 18:53:29.121645 kernel: x2apic enabled Feb 9 18:53:29.121772 kernel: Switched APIC routing to physical x2apic. 
Feb 9 18:53:29.121791 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 9 18:53:29.121808 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Feb 9 18:53:29.121820 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 18:53:29.121832 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 18:53:29.121844 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 18:53:29.121867 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 18:53:29.121883 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 18:53:29.121895 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 18:53:29.121909 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 9 18:53:29.121923 kernel: RETBleed: Vulnerable Feb 9 18:53:29.121935 kernel: Speculative Store Bypass: Vulnerable Feb 9 18:53:29.121948 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 18:53:29.121961 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 18:53:29.121974 kernel: GDS: Unknown: Dependent on hypervisor status Feb 9 18:53:29.121987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 18:53:29.122195 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 18:53:29.122211 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 18:53:29.122225 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 9 18:53:29.122238 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 9 18:53:29.122251 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 9 18:53:29.122267 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 9 18:53:29.122280 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 9 18:53:29.122293 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 9 18:53:29.122306 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 18:53:29.122319 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 9 18:53:29.122331 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 9 18:53:29.122345 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Feb 9 18:53:29.122358 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Feb 9 18:53:29.122371 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Feb 9 18:53:29.122383 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Feb 9 18:53:29.122396 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Feb 9 18:53:29.122409 kernel: Freeing SMP alternatives memory: 32K Feb 9 18:53:29.122425 kernel: pid_max: default: 32768 minimum: 301 Feb 9 18:53:29.122438 kernel: LSM: Security Framework initializing Feb 9 18:53:29.122451 kernel: SELinux: Initializing. Feb 9 18:53:29.122464 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 18:53:29.122477 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 18:53:29.122507 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 9 18:53:29.122521 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. 
Feb 9 18:53:29.122534 kernel: signal: max sigframe size: 3632 Feb 9 18:53:29.122548 kernel: rcu: Hierarchical SRCU implementation. Feb 9 18:53:29.122561 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 18:53:29.122576 kernel: smp: Bringing up secondary CPUs ... Feb 9 18:53:29.122589 kernel: x86: Booting SMP configuration: Feb 9 18:53:29.122603 kernel: .... node #0, CPUs: #1 Feb 9 18:53:29.122616 kernel: kvm-clock: cpu 1, msr 1ffaa041, secondary cpu clock Feb 9 18:53:29.122629 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Feb 9 18:53:29.122643 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 9 18:53:29.122658 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 9 18:53:29.122670 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 18:53:29.122683 kernel: smpboot: Max logical packages: 1 Feb 9 18:53:29.122699 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Feb 9 18:53:29.122712 kernel: devtmpfs: initialized Feb 9 18:53:29.122725 kernel: x86/mm: Memory block size: 128MB Feb 9 18:53:29.122739 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 18:53:29.122753 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 18:53:29.122766 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 18:53:29.122779 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 18:53:29.122792 kernel: audit: initializing netlink subsys (disabled) Feb 9 18:53:29.122806 kernel: audit: type=2000 audit(1707504807.625:1): state=initialized audit_enabled=0 res=1 Feb 9 18:53:29.122821 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 18:53:29.122833 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 18:53:29.122846 kernel: cpuidle: using governor menu Feb 9 18:53:29.122859 kernel: ACPI: bus type PCI registered Feb 9 18:53:29.122872 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 18:53:29.122885 kernel: dca service started, version 1.12.1 Feb 9 18:53:29.122899 kernel: PCI: Using configuration type 1 for base access Feb 9 18:53:29.122912 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 18:53:29.122925 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 18:53:29.122941 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 18:53:29.122954 kernel: ACPI: Added _OSI(Module Device) Feb 9 18:53:29.122968 kernel: ACPI: Added _OSI(Processor Device) Feb 9 18:53:29.122980 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 18:53:29.122994 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 18:53:29.123007 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 18:53:29.123020 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 18:53:29.123033 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 18:53:29.123046 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 9 18:53:29.123061 kernel: ACPI: Interpreter enabled Feb 9 18:53:29.123074 kernel: ACPI: PM: (supports S0 S5) Feb 9 18:53:29.123087 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 18:53:29.123100 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 18:53:29.123113 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 9 18:53:29.123127 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 18:53:29.123313 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 9 18:53:29.123441 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Feb 9 18:53:29.123464 kernel: acpiphp: Slot [3] registered Feb 9 18:53:29.123479 kernel: acpiphp: Slot [4] registered Feb 9 18:53:29.123515 kernel: acpiphp: Slot [5] registered Feb 9 18:53:29.123530 kernel: acpiphp: Slot [6] registered Feb 9 18:53:29.123545 kernel: acpiphp: Slot [7] registered Feb 9 18:53:29.123560 kernel: acpiphp: Slot [8] registered Feb 9 18:53:29.123574 kernel: acpiphp: Slot [9] registered Feb 9 18:53:29.123589 kernel: acpiphp: Slot [10] registered Feb 9 18:53:29.123604 kernel: acpiphp: Slot [11] registered Feb 9 18:53:29.123621 kernel: acpiphp: Slot [12] registered Feb 9 18:53:29.123636 kernel: acpiphp: Slot [13] registered Feb 9 18:53:29.123651 kernel: acpiphp: Slot [14] registered Feb 9 18:53:29.123665 kernel: acpiphp: Slot [15] registered Feb 9 18:53:29.123680 kernel: acpiphp: Slot [16] registered Feb 9 18:53:29.123695 kernel: acpiphp: Slot [17] registered Feb 9 18:53:29.123710 kernel: acpiphp: Slot [18] registered Feb 9 18:53:29.123725 kernel: acpiphp: Slot [19] registered Feb 9 18:53:29.123740 kernel: acpiphp: Slot [20] registered Feb 9 18:53:29.123758 kernel: acpiphp: Slot [21] registered Feb 9 18:53:29.123772 kernel: acpiphp: Slot [22] registered Feb 9 18:53:29.123787 kernel: acpiphp: Slot [23] registered Feb 9 18:53:29.123803 kernel: acpiphp: Slot [24] registered Feb 9 18:53:29.123818 kernel: acpiphp: Slot [25] registered Feb 9 18:53:29.123833 kernel: acpiphp: Slot [26] registered Feb 9 18:53:29.123847 kernel: acpiphp: Slot [27] registered Feb 9 18:53:29.123862 kernel: acpiphp: Slot [28] registered Feb 9 18:53:29.123876 kernel: acpiphp: Slot [29] registered Feb 9 18:53:29.123891 kernel: acpiphp: Slot [30] registered Feb 9 18:53:29.123910 kernel: acpiphp: Slot [31] registered Feb 9 18:53:29.123924 kernel: PCI host bridge to bus 0000:00 Feb 9 18:53:29.124055 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 18:53:29.124169 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 18:53:29.124280 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 
18:53:29.124400 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 9 18:53:29.124525 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 18:53:29.124674 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 18:53:29.124807 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 18:53:29.124942 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Feb 9 18:53:29.125274 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 9 18:53:29.125428 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 9 18:53:29.125566 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Feb 9 18:53:29.125722 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Feb 9 18:53:29.125849 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Feb 9 18:53:29.125970 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Feb 9 18:53:29.126162 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Feb 9 18:53:29.126289 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Feb 9 18:53:29.126410 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 10742 usecs Feb 9 18:53:29.126551 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Feb 9 18:53:29.126721 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Feb 9 18:53:29.126905 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 9 18:53:29.127105 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 18:53:29.127238 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 9 18:53:29.127362 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Feb 9 18:53:29.127501 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 9 18:53:29.127626 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Feb 9 18:53:29.127648 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 18:53:29.127664 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 18:53:29.127678 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 18:53:29.127693 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 18:53:29.127707 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 18:53:29.127722 kernel: iommu: Default domain type: Translated Feb 9 18:53:29.127737 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 18:53:29.127856 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Feb 9 18:53:29.127976 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 18:53:29.128101 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Feb 9 18:53:29.128119 kernel: vgaarb: loaded Feb 9 18:53:29.128134 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 18:53:29.128149 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 18:53:29.128163 kernel: PTP clock support registered Feb 9 18:53:29.128178 kernel: PCI: Using ACPI for IRQ routing Feb 9 18:53:29.128192 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 18:53:29.128207 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 9 18:53:29.128224 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Feb 9 18:53:29.128238 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 9 18:53:29.128253 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Feb 9 18:53:29.128268 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 18:53:29.128282 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 18:53:29.128296 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 18:53:29.128311 kernel: pnp: PnP ACPI init Feb 9 18:53:29.128325 kernel: pnp: PnP ACPI: found 5 devices Feb 9 18:53:29.128340 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 18:53:29.128357 kernel: NET: Registered PF_INET protocol family Feb 9 18:53:29.128371 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 18:53:29.128392 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 9 18:53:29.128407 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 18:53:29.128422 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 18:53:29.128437 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 18:53:29.128455 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 9 18:53:29.128547 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 18:53:29.128562 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 18:53:29.128580 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 18:53:29.128594 kernel: NET: Registered PF_XDP protocol family Feb 9 18:53:29.128719 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 18:53:29.128829 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 18:53:29.128931 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 18:53:29.130426 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 9 18:53:29.130595 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 18:53:29.130742 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 18:53:29.130768 kernel: PCI: CLS 0 bytes, default 64 Feb 9 18:53:29.130785 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 18:53:29.130801 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 9 18:53:29.130817 kernel: clocksource: Switched to clocksource tsc Feb 9 18:53:29.130833 kernel: Initialise system trusted keyrings Feb 9 18:53:29.130848 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 18:53:29.130863 kernel: Key type asymmetric registered Feb 9 18:53:29.130877 kernel: Asymmetric key parser 'x509' registered Feb 9 18:53:29.130895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 18:53:29.130911 kernel: io scheduler mq-deadline registered Feb 9 18:53:29.131014 kernel: io scheduler kyber registered Feb 9 18:53:29.131030 kernel: io scheduler bfq registered Feb 9 18:53:29.131046 kernel: ioatdma: Intel(R) QuickData 
Technology Driver 5.00 Feb 9 18:53:29.131062 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 18:53:29.131077 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 18:53:29.131093 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 18:53:29.131108 kernel: i8042: Warning: Keylock active Feb 9 18:53:29.131128 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 18:53:29.131144 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 18:53:29.131303 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 9 18:53:29.131432 kernel: rtc_cmos 00:00: registered as rtc0 Feb 9 18:53:29.131574 kernel: rtc_cmos 00:00: setting system clock to 2024-02-09T18:53:28 UTC (1707504808) Feb 9 18:53:29.131745 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 9 18:53:29.131766 kernel: intel_pstate: CPU model not supported Feb 9 18:53:29.131781 kernel: NET: Registered PF_INET6 protocol family Feb 9 18:53:29.131800 kernel: Segment Routing with IPv6 Feb 9 18:53:29.131814 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 18:53:29.131827 kernel: NET: Registered PF_PACKET protocol family Feb 9 18:53:29.131840 kernel: Key type dns_resolver registered Feb 9 18:53:29.131853 kernel: IPI shorthand broadcast: enabled Feb 9 18:53:29.131867 kernel: sched_clock: Marking stable (440691731, 257923326)->(823415416, -124800359) Feb 9 18:53:29.131930 kernel: registered taskstats version 1 Feb 9 18:53:29.131946 kernel: Loading compiled-in X.509 certificates Feb 9 18:53:29.131961 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 18:53:29.132015 kernel: Key type .fscrypt registered Feb 9 18:53:29.132032 kernel: Key type fscrypt-provisioning registered Feb 9 18:53:29.132048 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 18:53:29.132062 kernel: ima: Allocated hash algorithm: sha1 Feb 9 18:53:29.132116 kernel: ima: No architecture policies found Feb 9 18:53:29.132133 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 18:53:29.132149 kernel: Write protecting the kernel read-only data: 28672k Feb 9 18:53:29.132201 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 18:53:29.132220 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 18:53:29.132240 kernel: Run /init as init process Feb 9 18:53:29.132300 kernel: with arguments: Feb 9 18:53:29.132317 kernel: /init Feb 9 18:53:29.132367 kernel: with environment: Feb 9 18:53:29.132395 kernel: HOME=/ Feb 9 18:53:29.132409 kernel: TERM=linux Feb 9 18:53:29.132459 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 18:53:29.132493 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:53:29.132550 systemd[1]: Detected virtualization amazon. Feb 9 18:53:29.132570 systemd[1]: Detected architecture x86-64. Feb 9 18:53:29.132587 systemd[1]: Running in initrd. Feb 9 18:53:29.132635 systemd[1]: No hostname configured, using default hostname. Feb 9 18:53:29.132666 systemd[1]: Hostname set to <localhost>. Feb 9 18:53:29.132721 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:53:29.132739 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 18:53:29.132757 systemd[1]: Queued start job for default target initrd.target. Feb 9 18:53:29.132845 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:53:29.132901 systemd[1]: Reached target cryptsetup.target. Feb 9 18:53:29.132918 systemd[1]: Reached target paths.target. Feb 9 18:53:29.132932 systemd[1]: Reached target slices.target. Feb 9 18:53:29.132946 systemd[1]: Reached target swap.target. Feb 9 18:53:29.132960 systemd[1]: Reached target timers.target. Feb 9 18:53:29.132979 systemd[1]: Listening on iscsid.socket. Feb 9 18:53:29.132994 systemd[1]: Listening on iscsiuio.socket. Feb 9 18:53:29.133010 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:53:29.133024 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:53:29.133040 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:53:29.133055 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:53:29.133132 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:53:29.133153 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:53:29.133169 systemd[1]: Reached target sockets.target. Feb 9 18:53:29.133185 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:53:29.133200 systemd[1]: Finished network-cleanup.service. Feb 9 18:53:29.133215 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 18:53:29.133230 systemd[1]: Starting systemd-journald.service... Feb 9 18:53:29.133245 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:53:29.133259 systemd[1]: Starting systemd-resolved.service... Feb 9 18:53:29.133274 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 18:53:29.133298 systemd-journald[184]: Journal started Feb 9 18:53:29.133375 systemd-journald[184]: Runtime Journal (/run/log/journal/ec280cd2a191bb4e971b027337e4afc4) is 4.8M, max 38.7M, 33.9M free. Feb 9 18:53:29.136498 systemd[1]: Started systemd-journald.service. Feb 9 18:53:29.139981 systemd-modules-load[185]: Inserted module 'overlay' Feb 9 18:53:29.288952 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 18:53:29.288989 kernel: Bridge firewalling registered Feb 9 18:53:29.289008 kernel: SCSI subsystem initialized Feb 9 18:53:29.289023 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 18:53:29.289045 kernel: device-mapper: uevent: version 1.0.3 Feb 9 18:53:29.289065 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 18:53:29.175097 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 9 18:53:29.230621 systemd-modules-load[185]: Inserted module 'dm_multipath' Feb 9 18:53:29.252608 systemd-resolved[186]: Positive Trust Anchors: Feb 9 18:53:29.252619 systemd-resolved[186]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:53:29.252670 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:53:29.309097 kernel: audit: type=1130 audit(1707504809.292:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.256404 systemd-resolved[186]: Defaulting to hostname 'linux'. Feb 9 18:53:29.309238 systemd[1]: Started systemd-resolved.service. Feb 9 18:53:29.318266 kernel: audit: type=1130 audit(1707504809.309:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.318749 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:53:29.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.321077 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 18:53:29.326501 kernel: audit: type=1130 audit(1707504809.319:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.327445 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:53:29.334945 kernel: audit: type=1130 audit(1707504809.325:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.334975 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 18:53:29.342712 kernel: audit: type=1130 audit(1707504809.333:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.342745 kernel: audit: type=1130 audit(1707504809.339:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:53:29.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.341603 systemd[1]: Reached target nss-lookup.target. Feb 9 18:53:29.355459 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 18:53:29.358031 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:53:29.358778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:53:29.367372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:53:29.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.374503 kernel: audit: type=1130 audit(1707504809.368:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.375856 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:53:29.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.382028 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 18:53:29.384102 kernel: audit: type=1130 audit(1707504809.376:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.385166 systemd[1]: Starting dracut-cmdline.service... Feb 9 18:53:29.391274 kernel: audit: type=1130 audit(1707504809.382:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.397717 dracut-cmdline[206]: dracut-dracut-053 Feb 9 18:53:29.401028 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 18:53:29.481502 kernel: Loading iSCSI transport class v2.0-870. Feb 9 18:53:29.495507 kernel: iscsi: registered transport (tcp) Feb 9 18:53:29.520506 kernel: iscsi: registered transport (qla4xxx) Feb 9 18:53:29.520572 kernel: QLogic iSCSI HBA Driver Feb 9 18:53:29.561984 systemd[1]: Finished dracut-cmdline.service. Feb 9 18:53:29.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:29.564849 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 18:53:29.636575 kernel: raid6: avx512x4 gen() 11737 MB/s Feb 9 18:53:29.654634 kernel: raid6: avx512x4 xor() 3740 MB/s Feb 9 18:53:29.674531 kernel: raid6: avx512x2 gen() 12952 MB/s Feb 9 18:53:29.691865 kernel: raid6: avx512x2 xor() 17593 MB/s Feb 9 18:53:29.709634 kernel: raid6: avx512x1 gen() 15493 MB/s Feb 9 18:53:29.727537 kernel: raid6: avx512x1 xor() 14887 MB/s Feb 9 18:53:29.744522 kernel: raid6: avx2x4 gen() 13671 MB/s Feb 9 18:53:29.762537 kernel: raid6: avx2x4 xor() 6497 MB/s Feb 9 18:53:29.779532 kernel: raid6: avx2x2 gen() 14627 MB/s Feb 9 18:53:29.797535 kernel: raid6: avx2x2 xor() 14677 MB/s Feb 9 18:53:29.814534 kernel: raid6: avx2x1 gen() 10350 MB/s Feb 9 18:53:29.831677 kernel: raid6: avx2x1 xor() 12341 MB/s Feb 9 18:53:29.849514 kernel: raid6: sse2x4 gen() 6953 MB/s Feb 9 18:53:29.867527 kernel: raid6: sse2x4 xor() 4869 MB/s Feb 9 18:53:29.885519 kernel: raid6: sse2x2 gen() 7516 MB/s Feb 9 18:53:29.903517 kernel: raid6: sse2x2 xor() 4469 MB/s Feb 9 18:53:29.924775 kernel: raid6: sse2x1 gen() 6058 MB/s Feb 9 18:53:29.946386 kernel: raid6: sse2x1 xor() 1746 MB/s Feb 9 18:53:29.946497 kernel: raid6: using algorithm avx512x1 gen() 15493 MB/s Feb 9 18:53:29.946529 kernel: raid6: .... xor() 14887 MB/s, rmw enabled Feb 9 18:53:29.948698 kernel: raid6: using avx512x2 recovery algorithm Feb 9 18:53:29.975514 kernel: xor: automatically using best checksumming function avx Feb 9 18:53:30.116565 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 18:53:30.127804 systemd[1]: Finished dracut-pre-udev.service. Feb 9 18:53:30.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:30.128000 audit: BPF prog-id=7 op=LOAD Feb 9 18:53:30.128000 audit: BPF prog-id=8 op=LOAD Feb 9 18:53:30.130637 systemd[1]: Starting systemd-udevd.service... Feb 9 18:53:30.164512 systemd-udevd[383]: Using default interface naming scheme 'v252'. Feb 9 18:53:30.173285 systemd[1]: Started systemd-udevd.service. Feb 9 18:53:30.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:30.178332 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:53:30.199878 dracut-pre-trigger[387]: rd.md=0: removing MD RAID activation Feb 9 18:53:30.253490 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 18:53:30.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:30.256873 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:53:30.331971 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:53:30.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:30.405954 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 18:53:30.429629 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 18:53:30.431509 kernel: AES CTR mode by8 optimization enabled Feb 9 18:53:30.460504 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 9 18:53:30.460844 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 9 18:53:30.462503 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Feb 9 18:53:30.468111 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:32:87:37:6a:87 Feb 9 18:53:30.467409 (udev-worker)[443]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:53:30.660592 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 9 18:53:30.660917 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 9 18:53:30.660941 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 9 18:53:30.661118 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 18:53:30.661138 kernel: GPT:9289727 != 16777215 Feb 9 18:53:30.661156 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:53:30.661233 kernel: GPT:9289727 != 16777215 Feb 9 18:53:30.661257 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:53:30.661275 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:53:30.661293 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (435) Feb 9 18:53:30.583503 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:53:30.672673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:53:30.683984 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:53:30.695436 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:53:30.698224 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:53:30.707760 systemd[1]: Starting disk-uuid.service... Feb 9 18:53:30.716633 disk-uuid[593]: Primary Header is updated. Feb 9 18:53:30.716633 disk-uuid[593]: Secondary Entries is updated. Feb 9 18:53:30.716633 disk-uuid[593]: Secondary Header is updated. Feb 9 18:53:30.721544 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:53:30.730502 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:53:31.737732 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 18:53:31.737805 disk-uuid[594]: The operation has completed successfully. Feb 9 18:53:31.876053 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:53:31.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:31.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:31.876163 systemd[1]: Finished disk-uuid.service. Feb 9 18:53:31.891436 systemd[1]: Starting verity-setup.service... Feb 9 18:53:31.912221 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 18:53:32.002979 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:53:32.005330 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:53:32.013832 systemd[1]: Finished verity-setup.service. Feb 9 18:53:32.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.123506 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Feb 9 18:53:32.124103 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:53:32.125964 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:53:32.129039 systemd[1]: Starting ignition-setup.service... Feb 9 18:53:32.132228 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 18:53:32.151537 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:53:32.151588 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 18:53:32.151601 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 18:53:32.161514 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 18:53:32.183575 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:53:32.219725 systemd[1]: Finished ignition-setup.service. Feb 9 18:53:32.222686 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:53:32.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.251214 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:53:32.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.253000 audit: BPF prog-id=9 op=LOAD Feb 9 18:53:32.255347 systemd[1]: Starting systemd-networkd.service... Feb 9 18:53:32.285053 systemd-networkd[1022]: lo: Link UP Feb 9 18:53:32.285402 systemd-networkd[1022]: lo: Gained carrier Feb 9 18:53:32.288423 systemd-networkd[1022]: Enumeration completed Feb 9 18:53:32.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.288562 systemd[1]: Started systemd-networkd.service. Feb 9 18:53:32.289781 systemd[1]: Reached target network.target. Feb 9 18:53:32.290959 systemd-networkd[1022]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:53:32.292260 systemd[1]: Starting iscsiuio.service... Feb 9 18:53:32.303372 systemd[1]: Started iscsiuio.service. Feb 9 18:53:32.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.306359 systemd[1]: Starting iscsid.service... Feb 9 18:53:32.307260 systemd-networkd[1022]: eth0: Link UP Feb 9 18:53:32.307265 systemd-networkd[1022]: eth0: Gained carrier Feb 9 18:53:32.314335 iscsid[1027]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:53:32.314335 iscsid[1027]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 18:53:32.314335 iscsid[1027]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:53:32.314335 iscsid[1027]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 18:53:32.326326 iscsid[1027]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:53:32.326326 iscsid[1027]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:53:32.329626 systemd[1]: Started iscsid.service. Feb 9 18:53:32.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.332218 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:53:32.340679 systemd-networkd[1022]: eth0: DHCPv4 address 172.31.24.123/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 18:53:32.350770 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:53:32.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.351999 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:53:32.353081 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:53:32.354392 systemd[1]: Reached target remote-fs.target. Feb 9 18:53:32.361117 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:53:32.375069 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:53:32.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.852744 ignition[992]: Ignition 2.14.0 Feb 9 18:53:32.852758 ignition[992]: Stage: fetch-offline Feb 9 18:53:32.852898 ignition[992]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:53:32.852954 ignition[992]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:53:32.868606 ignition[992]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:53:32.870276 ignition[992]: Ignition finished successfully Feb 9 18:53:32.872092 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:53:32.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.874257 systemd[1]: Starting ignition-fetch.service... 
Feb 9 18:53:32.884321 ignition[1046]: Ignition 2.14.0 Feb 9 18:53:32.884334 ignition[1046]: Stage: fetch Feb 9 18:53:32.885641 ignition[1046]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:53:32.885680 ignition[1046]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:53:32.899287 ignition[1046]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:53:32.900685 ignition[1046]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:53:32.908510 ignition[1046]: INFO : PUT result: OK Feb 9 18:53:32.910901 ignition[1046]: DEBUG : parsed url from cmdline: "" Feb 9 18:53:32.910901 ignition[1046]: INFO : no config URL provided Feb 9 18:53:32.910901 ignition[1046]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:53:32.915004 ignition[1046]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 9 18:53:32.915004 ignition[1046]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:53:32.915004 ignition[1046]: INFO : PUT result: OK Feb 9 18:53:32.915004 ignition[1046]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 9 18:53:32.921704 ignition[1046]: INFO : GET result: OK Feb 9 18:53:32.921704 ignition[1046]: DEBUG : parsing config with SHA512: 73047147b4ade3dcb84d335a106e09912091794a835dba8447638490a1cf17277aaae2412cfc83a676fa717bce78029fd4936600a53f5545105806cbfad42d57 Feb 9 18:53:32.965619 unknown[1046]: fetched base config from "system" Feb 9 18:53:32.965820 unknown[1046]: fetched base config from "system" Feb 9 18:53:32.965829 unknown[1046]: fetched user config from "aws" Feb 9 18:53:32.969176 ignition[1046]: fetch: fetch complete Feb 9 18:53:32.969196 ignition[1046]: fetch: fetch passed Feb 9 18:53:32.969256 ignition[1046]: Ignition finished successfully Feb 9 18:53:32.973207 systemd[1]: Finished ignition-fetch.service. Feb 9 18:53:32.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:32.978615 systemd[1]: Starting ignition-kargs.service... Feb 9 18:53:32.997203 ignition[1052]: Ignition 2.14.0 Feb 9 18:53:32.997217 ignition[1052]: Stage: kargs Feb 9 18:53:32.997702 ignition[1052]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:53:32.997908 ignition[1052]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:53:33.008360 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:53:33.010123 ignition[1052]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:53:33.012436 ignition[1052]: INFO : PUT result: OK Feb 9 18:53:33.016658 ignition[1052]: kargs: kargs passed Feb 9 18:53:33.017726 ignition[1052]: Ignition finished successfully Feb 9 18:53:33.019777 systemd[1]: Finished ignition-kargs.service. Feb 9 18:53:33.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.020801 systemd[1]: Starting ignition-disks.service... 
Feb 9 18:53:33.032156 ignition[1058]: Ignition 2.14.0 Feb 9 18:53:33.032166 ignition[1058]: Stage: disks Feb 9 18:53:33.032306 ignition[1058]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:53:33.032333 ignition[1058]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:53:33.040237 ignition[1058]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:53:33.041885 ignition[1058]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:53:33.045091 ignition[1058]: INFO : PUT result: OK Feb 9 18:53:33.048205 ignition[1058]: disks: disks passed Feb 9 18:53:33.048278 ignition[1058]: Ignition finished successfully Feb 9 18:53:33.050686 systemd[1]: Finished ignition-disks.service. Feb 9 18:53:33.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.050918 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:53:33.053912 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:53:33.054879 systemd[1]: Reached target local-fs.target. Feb 9 18:53:33.058598 systemd[1]: Reached target sysinit.target. Feb 9 18:53:33.060179 systemd[1]: Reached target basic.target. Feb 9 18:53:33.063399 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:53:33.095948 systemd-fsck[1066]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 18:53:33.101587 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:53:33.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.104504 systemd[1]: Mounting sysroot.mount... Feb 9 18:53:33.121504 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:53:33.124283 systemd[1]: Mounted sysroot.mount. Feb 9 18:53:33.133933 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:53:33.155351 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:53:33.158434 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 18:53:33.158523 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:53:33.158560 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:53:33.163677 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:53:33.183490 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:53:33.187330 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:53:33.201515 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1083) Feb 9 18:53:33.205430 initrd-setup-root[1088]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:53:33.209166 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:53:33.209190 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 18:53:33.209201 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 18:53:33.215506 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 18:53:33.218437 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 18:53:33.225879 initrd-setup-root[1114]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:53:33.233199 initrd-setup-root[1122]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:53:33.239014 initrd-setup-root[1130]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:53:33.414662 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:53:33.423036 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:53:33.423067 kernel: audit: type=1130 audit(1707504813.413:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.416053 systemd[1]: Starting ignition-mount.service... Feb 9 18:53:33.428792 systemd[1]: Starting sysroot-boot.service... Feb 9 18:53:33.435833 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 18:53:33.436112 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 18:53:33.472037 ignition[1149]: INFO : Ignition 2.14.0 Feb 9 18:53:33.474164 ignition[1149]: INFO : Stage: mount Feb 9 18:53:33.474164 ignition[1149]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:53:33.474164 ignition[1149]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:53:33.482608 systemd[1]: Finished sysroot-boot.service. Feb 9 18:53:33.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.487238 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:53:33.490834 kernel: audit: type=1130 audit(1707504813.483:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.490862 ignition[1149]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:53:33.492577 ignition[1149]: INFO : PUT result: OK Feb 9 18:53:33.497268 ignition[1149]: INFO : mount: mount passed Feb 9 18:53:33.498278 ignition[1149]: INFO : Ignition finished successfully Feb 9 18:53:33.500547 systemd[1]: Finished ignition-mount.service. Feb 9 18:53:33.501896 systemd[1]: Starting ignition-files.service... Feb 9 18:53:33.509075 kernel: audit: type=1130 audit(1707504813.499:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:33.512583 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 9 18:53:33.527504 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1158) Feb 9 18:53:33.528143 systemd-networkd[1022]: eth0: Gained IPv6LL Feb 9 18:53:33.532689 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 18:53:33.532741 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 18:53:33.532760 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 18:53:33.540507 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 18:53:33.543158 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:53:33.586712 ignition[1177]: INFO : Ignition 2.14.0 Feb 9 18:53:33.586712 ignition[1177]: INFO : Stage: files Feb 9 18:53:33.591227 ignition[1177]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:53:33.591227 ignition[1177]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:53:33.615736 ignition[1177]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:53:33.617478 ignition[1177]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:53:33.624567 ignition[1177]: INFO : PUT result: OK Feb 9 18:53:33.628732 ignition[1177]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:53:33.634044 ignition[1177]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:53:33.634044 ignition[1177]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:53:33.668285 ignition[1177]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:53:33.670248 ignition[1177]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:53:33.674421 unknown[1177]: wrote ssh authorized keys file for user: core Feb 9 18:53:33.678532 ignition[1177]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:53:33.681253 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 18:53:33.684503 ignition[1177]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 9 18:53:34.162088 ignition[1177]: INFO : GET result: OK Feb 9 18:53:34.426281 ignition[1177]: DEBUG : file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 9 18:53:34.429403 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 18:53:34.429403 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 18:53:34.429403 ignition[1177]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 18:53:34.503680 ignition[1177]: INFO : GET result: OK Feb 9 18:53:34.622633 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 18:53:34.625235 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 18:53:34.625235 ignition[1177]: INFO : 
GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 9 18:53:35.102520 ignition[1177]: INFO : GET result: OK Feb 9 18:53:35.222912 ignition[1177]: DEBUG : file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 9 18:53:35.226119 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 18:53:35.226119 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:53:35.226119 ignition[1177]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Feb 9 18:53:35.322613 ignition[1177]: INFO : GET result: OK Feb 9 18:53:35.839393 ignition[1177]: DEBUG : file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Feb 9 18:53:35.843306 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:53:35.843306 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:53:35.843306 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:53:35.843306 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 18:53:35.843306 ignition[1177]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:53:35.858422 ignition[1177]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3506967435" Feb 9 18:53:35.860316 ignition[1177]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3506967435": device or resource busy Feb 9 18:53:35.860316 ignition[1177]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3506967435", trying btrfs: device or resource busy Feb 9 18:53:35.860316 ignition[1177]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3506967435" Feb 9 18:53:35.869475 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1182) Feb 9 18:53:35.871742 ignition[1177]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3506967435" Feb 9 18:53:35.871742 ignition[1177]: INFO : op(3): [started] unmounting "/mnt/oem3506967435" Feb 9 18:53:35.871742 ignition[1177]: INFO : op(3): [finished] unmounting "/mnt/oem3506967435" Feb 9 18:53:35.877516 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 18:53:35.877516 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:53:35.877516 ignition[1177]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 9 18:53:35.875660 systemd[1]: mnt-oem3506967435.mount: Deactivated successfully. 
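Each artifact fetched above is checked against a pinned digest before being written ("file matches expected sum of: ..."). The check itself is a plain SHA-512 comparison; a standalone Go sketch, where the digest is the kubectl sum recorded in the log and the local file path is hypothetical:

    package main

    import (
    	"crypto/sha512"
    	"encoding/hex"
    	"fmt"
    	"os"
    )

    func main() {
    	// Pinned digest: the kubectl v1.27.2 sum recorded in the log above.
    	const want = "857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83"

    	data, err := os.ReadFile("kubectl") // hypothetical downloaded copy
    	if err != nil {
    		panic(err)
    	}
    	sum := sha512.Sum512(data)
    	if hex.EncodeToString(sum[:]) != want {
    		fmt.Fprintln(os.Stderr, "checksum mismatch: refusing to install")
    		os.Exit(1)
    	}
    	fmt.Println("file matches expected sum")
    }

A mismatch aborts the write, which is why downloads that pass are immediately followed by "[finished] writing file" entries.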
Feb 9 18:53:36.068709 ignition[1177]: INFO : GET result: OK Feb 9 18:53:42.291699 ignition[1177]: DEBUG : file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 9 18:53:42.295471 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:53:42.295471 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:53:42.295471 ignition[1177]: INFO : GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 9 18:53:42.571452 ignition[1177]: INFO : GET result: OK Feb 9 18:53:57.068051 ignition[1177]: DEBUG : file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 9 18:53:57.071027 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:53:57.071027 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:53:57.071027 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:53:57.071027 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:53:57.071027 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:53:57.071027 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:53:57.071027 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:53:57.071027 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:53:57.071027 ignition[1177]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 18:53:57.532958 ignition[1177]: INFO : GET result: OK Feb 9 18:53:57.944861 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:53:57.944861 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:53:57.949586 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:53:57.949586 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:53:57.949586 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:53:57.949586 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:53:57.949586 ignition[1177]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:53:57.974004 ignition[1177]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1368901520" Feb 9 18:53:57.976782 
ignition[1177]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1368901520": device or resource busy Feb 9 18:53:57.976782 ignition[1177]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1368901520", trying btrfs: device or resource busy Feb 9 18:53:57.976782 ignition[1177]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1368901520" Feb 9 18:53:57.982652 ignition[1177]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1368901520" Feb 9 18:53:57.982652 ignition[1177]: INFO : op(6): [started] unmounting "/mnt/oem1368901520" Feb 9 18:53:57.982652 ignition[1177]: INFO : op(6): [finished] unmounting "/mnt/oem1368901520" Feb 9 18:53:57.982652 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:53:57.982652 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 18:53:57.982652 ignition[1177]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:53:57.998207 systemd[1]: mnt-oem1368901520.mount: Deactivated successfully. Feb 9 18:53:58.009118 ignition[1177]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem322256479" Feb 9 18:53:58.011045 ignition[1177]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem322256479": device or resource busy Feb 9 18:53:58.011045 ignition[1177]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem322256479", trying btrfs: device or resource busy Feb 9 18:53:58.011045 ignition[1177]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem322256479" Feb 9 18:53:58.011045 ignition[1177]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem322256479" Feb 9 18:53:58.011045 ignition[1177]: INFO : op(9): [started] unmounting "/mnt/oem322256479" Feb 9 18:53:58.011045 ignition[1177]: INFO : op(9): [finished] unmounting "/mnt/oem322256479" Feb 9 18:53:58.011045 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 18:53:58.022775 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 18:53:58.022775 ignition[1177]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:53:58.037996 ignition[1177]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2637657049" Feb 9 18:53:58.037996 ignition[1177]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2637657049": device or resource busy Feb 9 18:53:58.037996 ignition[1177]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2637657049", trying btrfs: device or resource busy Feb 9 18:53:58.037996 ignition[1177]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2637657049" Feb 9 18:53:58.048426 ignition[1177]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2637657049" Feb 9 18:53:58.048426 ignition[1177]: INFO : op(c): [started] unmounting "/mnt/oem2637657049" Feb 9 18:53:58.048426 ignition[1177]: INFO : op(c): [finished] unmounting "/mnt/oem2637657049" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file 
"/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(15): [started] processing unit "amazon-ssm-agent.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(15): op(16): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(15): op(16): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(15): [finished] processing unit "amazon-ssm-agent.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(17): [started] processing unit "nvidia.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(17): [finished] processing unit "nvidia.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(18): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(18): op(19): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(18): op(19): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(18): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(1a): [started] processing unit "prepare-critools.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:53:58.048426 ignition[1177]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(1c): [started] processing unit "prepare-helm.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(1e): [started] setting preset to enabled for "nvidia.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(1e): [finished] setting preset to enabled for "nvidia.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(20): [finished] setting preset to enabled for 
"prepare-critools.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(21): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(22): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(22): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(23): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 18:53:58.086316 ignition[1177]: INFO : files: op(23): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 18:53:58.120892 ignition[1177]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:53:58.120892 ignition[1177]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:53:58.120892 ignition[1177]: INFO : files: files passed Feb 9 18:53:58.120892 ignition[1177]: INFO : Ignition finished successfully Feb 9 18:53:58.129549 systemd[1]: Finished ignition-files.service. Feb 9 18:53:58.135662 kernel: audit: type=1130 audit(1707504838.129:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.140577 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:53:58.144821 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:53:58.145752 systemd[1]: Starting ignition-quench.service... Feb 9 18:53:58.153086 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:53:58.155308 systemd[1]: Finished ignition-quench.service. Feb 9 18:53:58.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.163454 initrd-setup-root-after-ignition[1202]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:53:58.174668 kernel: audit: type=1130 audit(1707504838.157:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.174701 kernel: audit: type=1131 audit(1707504838.157:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.164463 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:53:58.167744 systemd[1]: Reached target ignition-complete.target. 
Feb 9 18:53:58.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.182518 kernel: audit: type=1130 audit(1707504838.166:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.169123 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:53:58.195956 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:53:58.196086 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:53:58.218232 kernel: audit: type=1130 audit(1707504838.197:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.218318 kernel: audit: type=1131 audit(1707504838.197:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.199995 systemd[1]: Reached target initrd-fs.target. Feb 9 18:53:58.213762 systemd[1]: Reached target initrd.target. Feb 9 18:53:58.214050 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:53:58.215852 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:53:58.239518 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:53:58.249556 kernel: audit: type=1130 audit(1707504838.238:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.240808 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:53:58.258041 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:53:58.258256 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:53:58.263264 systemd[1]: Stopped target timers.target. Feb 9 18:53:58.265393 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:53:58.266763 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:53:58.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.268910 systemd[1]: Stopped target initrd.target. Feb 9 18:53:58.275616 kernel: audit: type=1131 audit(1707504838.267:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.275527 systemd[1]: Stopped target basic.target. 
Feb 9 18:53:58.276680 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:53:58.280376 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:53:58.282800 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:53:58.285078 systemd[1]: Stopped target remote-fs.target. Feb 9 18:53:58.286450 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:53:58.290070 systemd[1]: Stopped target sysinit.target. Feb 9 18:53:58.291878 systemd[1]: Stopped target local-fs.target. Feb 9 18:53:58.293690 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:53:58.295757 systemd[1]: Stopped target swap.target. Feb 9 18:53:58.297411 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:53:58.298679 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:53:58.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.300984 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:53:58.310668 kernel: audit: type=1131 audit(1707504838.299:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.310505 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:53:58.320097 kernel: audit: type=1131 audit(1707504838.310:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.310628 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:53:58.312037 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:53:58.312148 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:53:58.316827 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:53:58.316922 systemd[1]: Stopped ignition-files.service. Feb 9 18:53:58.321812 systemd[1]: Stopping ignition-mount.service... Feb 9 18:53:58.329736 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:53:58.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.335134 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:53:58.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.335935 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:53:58.341805 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:53:58.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:53:58.342010 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:53:58.347523 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:53:58.349386 ignition[1215]: INFO : Ignition 2.14.0 Feb 9 18:53:58.349386 ignition[1215]: INFO : Stage: umount Feb 9 18:53:58.349386 ignition[1215]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:53:58.349386 ignition[1215]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 18:53:58.347633 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:53:58.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.363100 ignition[1215]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 18:53:58.364759 ignition[1215]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 18:53:58.366564 ignition[1215]: INFO : PUT result: OK Feb 9 18:53:58.371582 ignition[1215]: INFO : umount: umount passed Feb 9 18:53:58.372668 ignition[1215]: INFO : Ignition finished successfully Feb 9 18:53:58.373679 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:53:58.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.373794 systemd[1]: Stopped ignition-mount.service. Feb 9 18:53:58.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.373979 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:53:58.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.374031 systemd[1]: Stopped ignition-disks.service. Feb 9 18:53:58.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.376848 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:53:58.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.377011 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:53:58.378799 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 18:53:58.378859 systemd[1]: Stopped ignition-fetch.service. Feb 9 18:53:58.380944 systemd[1]: Stopped target network.target. Feb 9 18:53:58.382549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:53:58.382604 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:53:58.384869 systemd[1]: Stopped target paths.target. 
Feb 9 18:53:58.389607 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:53:58.393525 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:53:58.397332 systemd[1]: Stopped target slices.target. Feb 9 18:53:58.399471 systemd[1]: Stopped target sockets.target. Feb 9 18:53:58.400608 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:53:58.400639 systemd[1]: Closed iscsid.socket. Feb 9 18:53:58.405326 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:53:58.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.406857 systemd[1]: Closed iscsiuio.socket. Feb 9 18:53:58.409910 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:53:58.411354 systemd[1]: Stopped ignition-setup.service. Feb 9 18:53:58.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.415279 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:53:58.418448 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:53:58.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.420122 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:53:58.420199 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:53:58.421666 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:53:58.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.421705 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:53:58.422530 systemd-networkd[1022]: eth0: DHCPv6 lease lost Feb 9 18:53:58.434000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:53:58.434000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:53:58.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.427092 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:53:58.427183 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:53:58.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.428941 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:53:58.430992 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:53:58.434678 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:53:58.434711 systemd[1]: Closed systemd-networkd.socket. 
Feb 9 18:53:58.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.436829 systemd[1]: Stopping network-cleanup.service... Feb 9 18:53:58.437715 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:53:58.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.437770 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:53:58.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.438919 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:53:58.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.438961 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:53:58.440098 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:53:58.440136 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:53:58.443721 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:53:58.449085 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:53:58.449229 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:53:58.451048 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:53:58.451134 systemd[1]: Stopped network-cleanup.service. Feb 9 18:53:58.452510 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:53:58.452544 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:53:58.454439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:53:58.454471 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:53:58.454542 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:53:58.454577 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:53:58.457740 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:53:58.457780 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:53:58.460443 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:53:58.460496 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:53:58.464283 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:53:58.486671 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 18:53:58.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.486755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 18:53:58.489834 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:53:58.489881 systemd[1]: Stopped kmod-static-nodes.service. 
Feb 9 18:53:58.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.494856 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:53:58.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.495019 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:53:58.498693 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:53:58.498790 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:53:58.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.504843 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:53:58.509545 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:53:58.529433 systemd[1]: Switching root. Feb 9 18:53:58.558250 systemd-journald[184]: Journal stopped Feb 9 18:54:02.477544 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 9 18:54:02.477774 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:54:02.477800 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:54:02.478081 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:54:02.478138 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:54:02.478157 kernel: SELinux: policy capability open_perms=1 Feb 9 18:54:02.478180 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:54:02.478265 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:54:02.478330 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:54:02.478363 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:54:02.478386 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:54:02.478500 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:54:02.478697 systemd[1]: Successfully loaded SELinux policy in 58.122ms. Feb 9 18:54:02.478743 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.122ms. Feb 9 18:54:02.482766 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:54:02.482835 systemd[1]: Detected virtualization amazon. Feb 9 18:54:02.482967 systemd[1]: Detected architecture x86-64. Feb 9 18:54:02.482996 systemd[1]: Detected first boot. Feb 9 18:54:02.483026 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:54:02.483081 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:54:02.483104 systemd[1]: Populated /etc with preset unit settings. 
Feb 9 18:54:02.483153 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:54:02.483176 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:54:02.483198 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:54:02.483255 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:54:02.483275 systemd[1]: Stopped iscsiuio.service. Feb 9 18:54:02.483682 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:54:02.483719 systemd[1]: Stopped iscsid.service. Feb 9 18:54:02.483741 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 18:54:02.483763 systemd[1]: Stopped initrd-switch-root.service. Feb 9 18:54:02.483785 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 18:54:02.483807 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:54:02.483828 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:54:02.483854 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 18:54:02.483876 systemd[1]: Created slice system-getty.slice. Feb 9 18:54:02.483899 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:54:02.483919 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:54:02.483941 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:54:02.483961 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:54:02.483982 systemd[1]: Created slice user.slice. Feb 9 18:54:02.484003 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:54:02.484135 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:54:02.484160 systemd[1]: Set up automount boot.automount. Feb 9 18:54:02.484208 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:54:02.484231 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 18:54:02.484251 systemd[1]: Stopped target initrd-fs.target. Feb 9 18:54:02.484272 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 18:54:02.484293 systemd[1]: Reached target integritysetup.target. Feb 9 18:54:02.484323 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:54:02.484349 systemd[1]: Reached target remote-fs.target. Feb 9 18:54:02.484368 systemd[1]: Reached target slices.target. Feb 9 18:54:02.484389 systemd[1]: Reached target swap.target. Feb 9 18:54:02.484410 systemd[1]: Reached target torcx.target. Feb 9 18:54:02.484431 systemd[1]: Reached target veritysetup.target. Feb 9 18:54:02.484453 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:54:02.484664 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:54:02.486463 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:54:02.486507 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:54:02.486526 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:54:02.486545 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:54:02.486569 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:54:02.486588 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:54:02.486614 systemd[1]: Mounting media.mount... Feb 9 18:54:02.486633 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
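The three warnings logged just after the switch to the real root (locksmithd.service lines 8 and 9, and docker.socket) are advisory directive renames, not failures; the units keep working until support is removed. A hypothetical before/after for the two deprecated directives, with illustrative values rather than locksmithd's actual settings:

    [Service]
    # Deprecated spellings flagged above:
    #   CPUShares=16
    #   MemoryLimit=32M
    # Current equivalents:
    CPUWeight=2
    MemoryMax=32M

For docker.socket the log states the fix directly: point ListenStream= at /run/docker.sock instead of the legacy /var/run/docker.sock path.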
Feb 9 18:54:02.486655 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:54:02.486675 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:54:02.486693 systemd[1]: Mounting tmp.mount... Feb 9 18:54:02.486712 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:54:02.486730 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:54:02.486750 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:54:02.486768 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:54:02.486787 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:54:02.486812 systemd[1]: Starting modprobe@drm.service... Feb 9 18:54:02.486834 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:54:02.486852 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:54:02.486872 systemd[1]: Starting modprobe@loop.service... Feb 9 18:54:02.487047 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:54:02.487072 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 18:54:02.487093 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 18:54:02.487112 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 18:54:02.487131 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 18:54:02.487153 systemd[1]: Stopped systemd-journald.service. Feb 9 18:54:02.487180 systemd[1]: Starting systemd-journald.service... Feb 9 18:54:02.487198 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:54:02.487217 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:54:02.487239 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:54:02.487257 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:54:02.487277 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 18:54:02.487295 systemd[1]: Stopped verity-setup.service. Feb 9 18:54:02.487315 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 18:54:02.487334 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:54:02.487355 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:54:02.487379 systemd[1]: Mounted media.mount. Feb 9 18:54:02.487398 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:54:02.487512 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:54:02.487537 systemd[1]: Mounted tmp.mount. Feb 9 18:54:02.487568 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:54:02.487594 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:54:02.487615 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:54:02.487632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:54:02.487653 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:54:02.487670 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:54:02.487688 kernel: fuse: init (API version 7.34) Feb 9 18:54:02.487708 systemd[1]: Finished modprobe@drm.service. Feb 9 18:54:02.487728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:54:02.487750 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:54:02.487770 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:54:02.487788 kernel: loop: module loaded Feb 9 18:54:02.487811 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:54:02.487832 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:54:02.487852 systemd[1]: Finished modprobe@fuse.service. 
Feb 9 18:54:02.487882 systemd-journald[1326]: Journal started Feb 9 18:54:02.487964 systemd-journald[1326]: Runtime Journal (/run/log/journal/ec280cd2a191bb4e971b027337e4afc4) is 4.8M, max 38.7M, 33.9M free. Feb 9 18:54:02.488020 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:53:58.749000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 18:53:58.825000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:53:58.825000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:53:58.825000 audit: BPF prog-id=10 op=LOAD Feb 9 18:53:58.825000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:53:58.825000 audit: BPF prog-id=11 op=LOAD Feb 9 18:53:58.825000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:53:58.963000 audit[1249]: AVC avc: denied { associate } for pid=1249 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:53:58.963000 audit[1249]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1232 pid=1249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:53:58.963000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:53:58.966000 audit[1249]: AVC avc: denied { associate } for pid=1249 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:53:58.966000 audit[1249]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179b9 a2=1ed a3=0 items=2 ppid=1232 pid=1249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:53:58.966000 audit: CWD cwd="/" Feb 9 18:53:58.966000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:53:58.966000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:53:58.966000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:54:01.947000 audit: BPF prog-id=12 op=LOAD Feb 9 18:54:01.947000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:54:01.947000 audit: BPF prog-id=13 op=LOAD Feb 9 
18:54:01.947000 audit: BPF prog-id=14 op=LOAD Feb 9 18:54:01.947000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:54:01.947000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:54:01.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:01.952000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:54:01.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:01.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:01.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:01.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.366000 audit: BPF prog-id=15 op=LOAD Feb 9 18:54:02.366000 audit: BPF prog-id=16 op=LOAD Feb 9 18:54:02.366000 audit: BPF prog-id=17 op=LOAD Feb 9 18:54:02.366000 audit: BPF prog-id=13 op=UNLOAD Feb 9 18:54:02.366000 audit: BPF prog-id=14 op=UNLOAD Feb 9 18:54:02.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.490564 systemd[1]: Finished modprobe@loop.service. 
Feb 9 18:54:02.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.471000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:54:02.471000 audit[1326]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffdeb89fb00 a2=4000 a3=7ffdeb89fb9c items=0 ppid=1 pid=1326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:54:02.471000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:54:02.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:53:58.960879 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:54:01.944896 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:53:58.961753 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:54:01.950706 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 18:53:58.961782 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:54:02.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.961825 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 18:53:58.961841 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 18:53:58.961885 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 18:53:58.961963 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 18:53:58.962216 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 18:53:58.962265 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 18:53:58.962284 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 18:53:58.963138 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 18:53:58.963193 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 18:53:58.963224 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" 
path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 18:54:02.495692 systemd[1]: Started systemd-journald.service. Feb 9 18:53:58.963247 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 18:54:02.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:53:58.963275 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 18:53:58.963299 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:53:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 18:54:01.366475 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:54:01Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:54:01.366739 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:54:01Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:54:01.366914 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:54:01Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:54:01.367101 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:54:01Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 18:54:01.367151 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:54:01Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 18:54:01.367207 /usr/lib/systemd/system-generators/torcx-generator[1249]: time="2024-02-09T18:54:01Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 18:54:02.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.498587 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:54:02.501022 systemd[1]: Reached target network-pre.target. Feb 9 18:54:02.507820 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:54:02.515012 systemd[1]: Mounting sys-kernel-config.mount... 
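The torcx-generator lines above document its store search: five store directories probed in order, missing ones skipped at level=info, and archives named name:reference.torcx.tgz added to a cache. A rough Python sketch of that lookup, assuming only the paths and file names the log itself shows (torcx is a Go binary; the first-store-wins precedence here is an assumption, not something the log confirms):

    from pathlib import Path

    # Store paths in the order torcx-generator reported scanning them.
    STORES = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.2",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.2",
        "/var/lib/torcx/store",
    ]

    def scan_stores(stores=STORES):
        """Map (name, reference) -> archive path, skipping missing stores."""
        cache = {}
        for store in map(Path, stores):
            if not store.is_dir():
                print(f"store skipped: {store}")  # mirrors the level=info lines
                continue
            for tgz in sorted(store.glob("*.torcx.tgz")):
                # e.g. docker:com.coreos.cl.torcx.tgz -> ("docker", "com.coreos.cl")
                name, _, ref = tgz.name[:-len(".torcx.tgz")].partition(":")
                cache.setdefault((name, ref), tgz)  # assumed: earlier store wins
        return cache

On the box above this would yield ("docker", "20.10") and ("docker", "com.coreos.cl") from /usr/share/torcx/store, with the four missing stores skipped exactly as logged.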
Feb 9 18:54:02.516047 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:54:02.523237 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:54:02.532963 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:54:02.534175 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:54:02.536452 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:54:02.537766 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:54:02.539695 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:54:02.542564 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:54:02.543988 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:54:02.571180 systemd-journald[1326]: Time spent on flushing to /var/log/journal/ec280cd2a191bb4e971b027337e4afc4 is 107.153ms for 1210 entries. Feb 9 18:54:02.571180 systemd-journald[1326]: System Journal (/var/log/journal/ec280cd2a191bb4e971b027337e4afc4) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:54:02.701884 systemd-journald[1326]: Received client request to flush runtime journal. Feb 9 18:54:02.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.588236 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:54:02.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:02.589697 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:54:02.634466 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:54:02.661913 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:54:02.666161 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:54:02.691621 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:54:02.695645 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:54:02.703283 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:54:02.719878 udevadm[1365]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 18:54:02.726202 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:54:02.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:54:02.729390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:54:02.788518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:54:02.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:03.625204 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:54:03.636967 kernel: kauditd_printk_skb: 96 callbacks suppressed Feb 9 18:54:03.637376 kernel: audit: type=1130 audit(1707504843.625:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:03.637434 kernel: audit: type=1334 audit(1707504843.634:135): prog-id=18 op=LOAD Feb 9 18:54:03.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:03.634000 audit: BPF prog-id=18 op=LOAD Feb 9 18:54:03.635000 audit: BPF prog-id=19 op=LOAD Feb 9 18:54:03.635000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:54:03.635000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:54:03.638623 systemd[1]: Starting systemd-udevd.service... Feb 9 18:54:03.644562 kernel: audit: type=1334 audit(1707504843.635:136): prog-id=19 op=LOAD Feb 9 18:54:03.644673 kernel: audit: type=1334 audit(1707504843.635:137): prog-id=7 op=UNLOAD Feb 9 18:54:03.644705 kernel: audit: type=1334 audit(1707504843.635:138): prog-id=8 op=UNLOAD Feb 9 18:54:03.669300 systemd-udevd[1368]: Using default interface naming scheme 'v252'. Feb 9 18:54:03.717257 systemd[1]: Started systemd-udevd.service. Feb 9 18:54:03.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:03.721910 systemd[1]: Starting systemd-networkd.service... Feb 9 18:54:03.736744 kernel: audit: type=1130 audit(1707504843.717:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:03.736863 kernel: audit: type=1334 audit(1707504843.719:140): prog-id=20 op=LOAD Feb 9 18:54:03.719000 audit: BPF prog-id=20 op=LOAD Feb 9 18:54:03.762843 kernel: audit: type=1334 audit(1707504843.746:141): prog-id=21 op=LOAD Feb 9 18:54:03.763010 kernel: audit: type=1334 audit(1707504843.747:142): prog-id=22 op=LOAD Feb 9 18:54:03.763051 kernel: audit: type=1334 audit(1707504843.747:143): prog-id=23 op=LOAD Feb 9 18:54:03.746000 audit: BPF prog-id=21 op=LOAD Feb 9 18:54:03.747000 audit: BPF prog-id=22 op=LOAD Feb 9 18:54:03.747000 audit: BPF prog-id=23 op=LOAD Feb 9 18:54:03.762381 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:54:03.828200 systemd[1]: Started systemd-userdbd.service. Feb 9 18:54:03.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:03.851367 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
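Once kauditd starts dropping printk callbacks (the kauditd_printk_skb line above), audit events reach the console with numeric types; the two seen here pair directly with their symbolic records: type=1130 carries SERVICE_START payloads and type=1334 carries the BPF prog-id events. The audit(EPOCH.MS:SERIAL) stamp is wall-clock time, and 1707504843 is indeed Feb 9 2024 18:54:03 UTC, matching the journal timestamps. A tiny annotator covering just the types this log exhibits:

    import re
    from datetime import datetime, timezone

    # Numeric audit types as paired with symbolic records in this log only.
    AUDIT_TYPES = {1130: "SERVICE_START", 1334: "BPF"}

    def annotate(line: str) -> str:
        m = re.search(r"audit: type=(\d+) audit\((\d+\.\d+):\d+\)", line)
        if not m:
            return line
        name = AUDIT_TYPES.get(int(m.group(1)), "?")
        when = datetime.fromtimestamp(float(m.group(2)), tz=timezone.utc)
        return f"{line}  [{name} @ {when:%Y-%m-%d %H:%M:%S} UTC]"

    print(annotate("audit: type=1334 audit(1707504843.635:137): prog-id=7 op=UNLOAD"))
    # -> ...  [BPF @ 2024-02-09 18:54:03 UTC]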
Feb 9 18:54:03.867450 (udev-worker)[1377]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:54:03.959699 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 18:54:03.968075 systemd-networkd[1374]: lo: Link UP Feb 9 18:54:03.968088 systemd-networkd[1374]: lo: Gained carrier Feb 9 18:54:03.968780 systemd-networkd[1374]: Enumeration completed Feb 9 18:54:03.968909 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:54:03.968949 systemd[1]: Started systemd-networkd.service. Feb 9 18:54:03.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:03.972278 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:54:03.981516 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:54:03.982355 systemd-networkd[1374]: eth0: Link UP Feb 9 18:54:03.982754 systemd-networkd[1374]: eth0: Gained carrier Feb 9 18:54:03.995599 kernel: ACPI: button: Power Button [PWRF] Feb 9 18:54:03.998045 systemd-networkd[1374]: eth0: DHCPv4 address 172.31.24.123/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 18:54:04.008512 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 9 18:54:04.014506 kernel: ACPI: button: Sleep Button [SLPF] Feb 9 18:54:04.016000 audit[1379]: AVC avc: denied { confidentiality } for pid=1379 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 18:54:04.016000 audit[1379]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f474bc3660 a1=32194 a2=7fb60e391bc5 a3=5 items=108 ppid=1368 pid=1379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:54:04.016000 audit: CWD cwd="/" Feb 9 18:54:04.016000 audit: PATH item=0 name=(null) inode=1033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=1 name=(null) inode=14769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=2 name=(null) inode=14769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=3 name=(null) inode=14770 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=4 name=(null) inode=14769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=5 name=(null) inode=14771 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=6 name=(null) inode=14769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=7 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=8 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=9 name=(null) inode=14773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=10 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=11 name=(null) inode=14774 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=12 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=13 name=(null) inode=14775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=14 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=15 name=(null) inode=14776 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=16 name=(null) inode=14772 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=17 name=(null) inode=14777 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=18 name=(null) inode=14769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=19 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=20 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=21 name=(null) inode=14779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=22 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 
audit: PATH item=23 name=(null) inode=14780 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=24 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=25 name=(null) inode=14781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=26 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=27 name=(null) inode=14782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=28 name=(null) inode=14778 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=29 name=(null) inode=14783 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=30 name=(null) inode=14769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=31 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=32 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=33 name=(null) inode=14785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=34 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=35 name=(null) inode=14786 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=36 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=37 name=(null) inode=14787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=38 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=39 name=(null) inode=14788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=40 name=(null) inode=14784 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=41 name=(null) inode=14789 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=42 name=(null) inode=14769 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=43 name=(null) inode=14790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=44 name=(null) inode=14790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=45 name=(null) inode=14791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=46 name=(null) inode=14790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=47 name=(null) inode=14792 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=48 name=(null) inode=14790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=49 name=(null) inode=14793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=50 name=(null) inode=14790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=51 name=(null) inode=14794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=52 name=(null) inode=14790 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=53 name=(null) inode=14795 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=54 name=(null) inode=1033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=55 name=(null) inode=14796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=56 name=(null) inode=14796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=57 name=(null) inode=14797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=58 name=(null) inode=14796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=59 name=(null) inode=14798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=60 name=(null) inode=14796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=61 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=62 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=63 name=(null) inode=14800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=64 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=65 name=(null) inode=14801 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=66 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=67 name=(null) inode=14802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=68 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=69 name=(null) inode=14803 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=70 name=(null) inode=14799 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=71 name=(null) inode=14804 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=72 name=(null) inode=14796 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=73 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=74 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=75 name=(null) inode=14806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=76 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=77 name=(null) inode=14807 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=78 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=79 name=(null) inode=14808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=80 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=81 name=(null) inode=14809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=82 name=(null) inode=14805 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=83 name=(null) inode=14810 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=84 name=(null) inode=14796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=85 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=86 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=87 name=(null) inode=14812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=88 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=89 name=(null) inode=14813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=90 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=91 name=(null) inode=14814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=92 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=93 name=(null) inode=14815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=94 name=(null) inode=14811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=95 name=(null) inode=14816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=96 name=(null) inode=14796 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=97 name=(null) inode=14817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=98 name=(null) inode=14817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=99 name=(null) inode=14818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=100 name=(null) inode=14817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=101 name=(null) inode=14819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=102 name=(null) inode=14817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=103 name=(null) inode=14820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=104 name=(null) inode=14817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH 
item=105 name=(null) inode=14821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=106 name=(null) inode=14817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PATH item=107 name=(null) inode=14822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:54:04.016000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 18:54:04.058504 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Feb 9 18:54:04.069565 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 9 18:54:04.096554 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1373) Feb 9 18:54:04.141509 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 18:54:04.289295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:54:04.401937 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:54:04.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.408190 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:54:04.438716 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:54:04.482449 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:54:04.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.486694 systemd[1]: Reached target cryptsetup.target. Feb 9 18:54:04.492244 systemd[1]: Starting lvm2-activation.service... Feb 9 18:54:04.501460 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:54:04.540126 systemd[1]: Finished lvm2-activation.service. Feb 9 18:54:04.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.541520 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:54:04.542971 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:54:04.543005 systemd[1]: Reached target local-fs.target. Feb 9 18:54:04.544202 systemd[1]: Reached target machines.target. Feb 9 18:54:04.547516 systemd[1]: Starting ldconfig.service... Feb 9 18:54:04.549431 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:54:04.549512 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:54:04.551076 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:54:04.554135 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:54:04.557598 systemd[1]: Starting systemd-machine-id-commit.service... 
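That single udev-worker event carried 108 PATH records (item=0 through item=107), all on tracefs and all under one lockdown "use of tracefs" AVC in permissive mode. Floods like that read better as a tally than as raw records; a sketch that assumes the items were first parsed into dicts (for instance with the parse_audit helper sketched earlier):

    from collections import Counter

    def tally_paths(records):
        """records: dicts parsed from 'audit: PATH item=...' lines."""
        counts = Counter((r["nametype"], r["mode"]) for r in records)
        for (nametype, mode), n in sorted(counts.items()):
            print(f"{n:4d}  {nametype:<7} mode={mode}")

    # Fed the 108 items above, the tally shows the shape of the event:
    # alternating PARENT directories (mode=040750) and CREATE entries for
    # tracefs control files (modes 0100640 and 0100440).
    tally_paths([
        {"nametype": "PARENT", "mode": "040750"},
        {"nametype": "CREATE", "mode": "0100640"},
        {"nametype": "CREATE", "mode": "0100440"},
    ])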
Feb 9 18:54:04.561553 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:54:04.562148 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:54:04.565287 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:54:04.579340 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1485 (bootctl) Feb 9 18:54:04.582012 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:54:04.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.617382 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:54:04.621563 systemd-tmpfiles[1488]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:54:04.624934 systemd-tmpfiles[1488]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:54:04.628774 systemd-tmpfiles[1488]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:54:04.762689 systemd-fsck[1493]: fsck.fat 4.2 (2021-01-31) Feb 9 18:54:04.762689 systemd-fsck[1493]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters Feb 9 18:54:04.765425 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:54:04.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.768972 systemd[1]: Mounting boot.mount... Feb 9 18:54:04.791730 systemd[1]: Mounted boot.mount. Feb 9 18:54:04.835595 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:54:04.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.926369 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:54:04.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.929637 systemd[1]: Starting audit-rules.service... Feb 9 18:54:04.938000 audit: BPF prog-id=24 op=LOAD Feb 9 18:54:04.942000 audit: BPF prog-id=25 op=LOAD Feb 9 18:54:04.932753 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:54:04.937076 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:54:04.941589 systemd[1]: Starting systemd-resolved.service... Feb 9 18:54:04.945398 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:54:04.948826 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:54:04.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.966553 systemd[1]: Finished clean-ca-certificates.service. 
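fsck.fat's one-line summary above ("/dev/nvme0n1p1: 789 files, 115339/258078 clusters") reports used/total clusters on the EFI system partition, so the ESP is a bit under half full. Converting to bytes would need the cluster size, which the log doesn't print; only the ratio is computable from what's here:

    used, total = 115339, 258078
    print(f"{used / total:.1%} of clusters in use")   # 44.7%
    # bytes in use = used * cluster_size; the cluster size is not in this log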
Feb 9 18:54:04.968967 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:54:04.974000 audit[1512]: SYSTEM_BOOT pid=1512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:04.984232 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:54:05.034759 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:54:05.040826 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:54:05.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:54:05.114312 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:54:05.115321 augenrules[1530]: No rules Feb 9 18:54:05.113000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:54:05.113000 audit[1530]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff4b64b100 a2=420 a3=0 items=0 ppid=1507 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:54:05.113000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:54:05.117168 systemd[1]: Finished audit-rules.service. Feb 9 18:54:05.126508 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:54:05.127857 systemd[1]: Reached target time-set.target. Feb 9 18:54:05.167381 systemd-resolved[1510]: Positive Trust Anchors: Feb 9 18:54:05.167397 systemd-resolved[1510]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:54:05.167439 systemd-resolved[1510]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:54:05.192256 systemd-resolved[1510]: Defaulting to hostname 'linux'. Feb 9 18:54:05.194108 systemd-timesyncd[1511]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org). Feb 9 18:54:05.194962 systemd-timesyncd[1511]: Initial clock synchronization to Fri 2024-02-09 18:54:05.056048 UTC. Feb 9 18:54:05.196174 systemd[1]: Started systemd-resolved.service. Feb 9 18:54:05.197377 systemd[1]: Reached target network.target. Feb 9 18:54:05.198528 systemd[1]: Reached target nss-lookup.target. Feb 9 18:54:05.253153 ldconfig[1484]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:54:05.259108 systemd[1]: Finished ldconfig.service. 
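The negative trust anchor list systemd-resolved prints above includes the sixteen zones 16.172.in-addr.arpa through 31.172.in-addr.arpa: exactly the reverse-DNS space of 172.16.0.0/12, the private range this host's own 172.31.24.123 address sits in. The enumeration is mechanical:

    import ipaddress

    # 172.16.0.0/12 splits into sixteen /16 blocks, each owning one
    # N.172.in-addr.arpa reverse zone for N = 16..31.
    zones = [
        f"{net.network_address.packed[1]}.172.in-addr.arpa"
        for net in ipaddress.ip_network("172.16.0.0/12").subnets(new_prefix=16)
    ]
    print(len(zones), zones[0], "...", zones[-1])
    # -> 16 16.172.in-addr.arpa ... 31.172.in-addr.arpa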
Feb 9 18:54:05.261950 systemd[1]: Starting systemd-update-done.service... Feb 9 18:54:05.271339 systemd[1]: Finished systemd-update-done.service. Feb 9 18:54:05.272946 systemd[1]: Reached target sysinit.target. Feb 9 18:54:05.274217 systemd[1]: Started motdgen.path. Feb 9 18:54:05.275261 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:54:05.276917 systemd[1]: Started logrotate.timer. Feb 9 18:54:05.278012 systemd[1]: Started mdadm.timer. Feb 9 18:54:05.279026 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:54:05.279969 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:54:05.279998 systemd[1]: Reached target paths.target. Feb 9 18:54:05.281185 systemd[1]: Reached target timers.target. Feb 9 18:54:05.282607 systemd[1]: Listening on dbus.socket. Feb 9 18:54:05.285334 systemd[1]: Starting docker.socket... Feb 9 18:54:05.289142 systemd[1]: Listening on sshd.socket. Feb 9 18:54:05.290361 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:54:05.290914 systemd[1]: Listening on docker.socket. Feb 9 18:54:05.292986 systemd[1]: Reached target sockets.target. Feb 9 18:54:05.294174 systemd[1]: Reached target basic.target. Feb 9 18:54:05.296754 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:54:05.296785 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:54:05.298317 systemd[1]: Starting containerd.service... Feb 9 18:54:05.300907 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 18:54:05.305958 systemd[1]: Starting dbus.service... Feb 9 18:54:05.308669 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:54:05.315618 systemd[1]: Starting extend-filesystems.service... Feb 9 18:54:05.316992 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:54:05.319648 systemd[1]: Starting motdgen.service... Feb 9 18:54:05.322527 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:54:05.324908 systemd[1]: Starting prepare-critools.service... Feb 9 18:54:05.328017 systemd[1]: Starting prepare-helm.service... Feb 9 18:54:05.330047 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:54:05.332547 systemd[1]: Starting sshd-keygen.service... Feb 9 18:54:05.337247 systemd[1]: Starting systemd-logind.service... Feb 9 18:54:05.339599 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:54:05.339659 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:54:05.396345 jq[1542]: false Feb 9 18:54:05.403594 jq[1552]: true Feb 9 18:54:05.340178 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 18:54:05.341000 systemd[1]: Starting update-engine.service... Feb 9 18:54:05.416352 tar[1557]: linux-amd64/helm Feb 9 18:54:05.344226 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
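Many units in this stretch are skipped rather than failed because a Condition* check was unmet: ConditionPathExists=/usr/.noupdate on update-engine-stub.timer, ConditionPathExists=/dev/tpm0 on tcsd.service, and earlier ConditionPathIsReadWrite=!/ and ConditionDirectoryNotEmpty=/sys/fs/pstore. A leading '!' negates the test. A toy evaluator for just the condition types this log shows; systemd's real checks are more precise (ConditionPathIsReadWrite inspects mount flags, not access bits):

    import os

    def check_condition(kind: str, value: str) -> bool:
        """Evaluate one systemd-style condition; a '!' prefix negates it."""
        negate = value.startswith("!")
        path = value.lstrip("!")
        if kind == "ConditionPathExists":
            ok = os.path.exists(path)
        elif kind == "ConditionPathIsReadWrite":
            ok = os.access(path, os.W_OK)   # rough stand-in for the mount-flag check
        elif kind == "ConditionDirectoryNotEmpty":
            ok = os.path.isdir(path) and bool(os.listdir(path))
        else:
            raise ValueError(f"unhandled condition type: {kind}")
        return ok != negate

    # remount-root.service was skipped on ConditionPathIsReadWrite=!/ :
    # / is already read-write here, so the negated check comes out False.
    print(check_condition("ConditionPathIsReadWrite", "!/"))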
Feb 9 18:54:05.420876 tar[1554]: ./ Feb 9 18:54:05.420876 tar[1554]: ./loopback Feb 9 18:54:05.354956 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:54:05.428841 tar[1555]: crictl Feb 9 18:54:05.355166 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:54:05.400521 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:54:05.400833 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:54:05.447750 jq[1563]: true Feb 9 18:54:05.456090 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:54:05.456347 systemd[1]: Finished motdgen.service. Feb 9 18:54:05.476702 dbus-daemon[1541]: [system] SELinux support is enabled Feb 9 18:54:05.479065 systemd[1]: Started dbus.service. Feb 9 18:54:05.483895 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:54:05.483934 systemd[1]: Reached target system-config.target. Feb 9 18:54:05.485108 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:54:05.485134 systemd[1]: Reached target user-config.target. Feb 9 18:54:05.497092 dbus-daemon[1541]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1374 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 18:54:05.501773 extend-filesystems[1543]: Found nvme0n1 Feb 9 18:54:05.506655 extend-filesystems[1543]: Found nvme0n1p1 Feb 9 18:54:05.512809 extend-filesystems[1543]: Found nvme0n1p2 Feb 9 18:54:05.514987 extend-filesystems[1543]: Found nvme0n1p3 Feb 9 18:54:05.517822 extend-filesystems[1543]: Found usr Feb 9 18:54:05.519877 extend-filesystems[1543]: Found nvme0n1p4 Feb 9 18:54:05.521078 extend-filesystems[1543]: Found nvme0n1p6 Feb 9 18:54:05.522006 extend-filesystems[1543]: Found nvme0n1p7 Feb 9 18:54:05.522006 extend-filesystems[1543]: Found nvme0n1p9 Feb 9 18:54:05.522006 extend-filesystems[1543]: Checking size of /dev/nvme0n1p9 Feb 9 18:54:05.521212 dbus-daemon[1541]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 18:54:05.527454 systemd[1]: Starting systemd-hostnamed.service... Feb 9 18:54:05.546419 update_engine[1551]: I0209 18:54:05.545531 1551 main.cc:92] Flatcar Update Engine starting Feb 9 18:54:05.552587 update_engine[1551]: I0209 18:54:05.552438 1551 update_check_scheduler.cc:74] Next update check in 5m46s Feb 9 18:54:05.552823 systemd[1]: Started update-engine.service. Feb 9 18:54:05.556775 systemd[1]: Started locksmithd.service. Feb 9 18:54:05.591724 systemd-networkd[1374]: eth0: Gained IPv6LL Feb 9 18:54:05.594965 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:54:05.598419 systemd[1]: Reached target network-online.target. Feb 9 18:54:05.609829 extend-filesystems[1543]: Resized partition /dev/nvme0n1p9 Feb 9 18:54:05.602633 systemd[1]: Started amazon-ssm-agent.service. Feb 9 18:54:05.606775 systemd[1]: Started nvidia.service. 
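The resize reported just below grows the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks while mounted; resize2fs states these are 4k blocks, so the filesystem goes from roughly 2.1 GiB to 5.7 GiB:

    BLOCK = 4096  # resize2fs reports "(4k) blocks" below
    for blocks in (553472, 1489915):
        print(f"{blocks:>8d} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    #   553472 blocks = 2.11 GiB
    #  1489915 blocks = 5.68 GiB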
Feb 9 18:54:05.711649 extend-filesystems[1599]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:54:05.726616 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 18:54:05.780230 env[1558]: time="2024-02-09T18:54:05.780171020Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:54:05.941511 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 18:54:05.964218 systemd-logind[1550]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 18:54:05.964655 systemd-logind[1550]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 18:54:05.964761 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 18:54:05.965031 systemd-logind[1550]: New seat seat0. Feb 9 18:54:05.966512 amazon-ssm-agent[1600]: 2024/02/09 18:54:05 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 18:54:05.967168 extend-filesystems[1599]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 18:54:05.967168 extend-filesystems[1599]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:54:05.967168 extend-filesystems[1599]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 18:54:05.993979 extend-filesystems[1543]: Resized filesystem in /dev/nvme0n1p9 Feb 9 18:54:05.995323 amazon-ssm-agent[1600]: Initializing new seelog logger Feb 9 18:54:05.995323 amazon-ssm-agent[1600]: New Seelog Logger Creation Complete Feb 9 18:54:05.995323 amazon-ssm-agent[1600]: 2024/02/09 18:54:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 18:54:05.995323 amazon-ssm-agent[1600]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 18:54:05.995323 amazon-ssm-agent[1600]: 2024/02/09 18:54:05 processing appconfig overrides Feb 9 18:54:05.969211 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:54:05.995866 bash[1610]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:54:05.996070 tar[1554]: ./bandwidth Feb 9 18:54:05.969417 systemd[1]: Finished extend-filesystems.service. Feb 9 18:54:05.983124 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:54:06.003211 systemd[1]: Started systemd-logind.service. Feb 9 18:54:06.005962 env[1558]: time="2024-02-09T18:54:06.005909994Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:54:06.008597 env[1558]: time="2024-02-09T18:54:06.008560602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:54:06.013726 env[1558]: time="2024-02-09T18:54:06.013678855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:54:06.015719 env[1558]: time="2024-02-09T18:54:06.015681783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:54:06.018547 env[1558]: time="2024-02-09T18:54:06.018500558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:54:06.018683 env[1558]: time="2024-02-09T18:54:06.018664935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:54:06.018779 env[1558]: time="2024-02-09T18:54:06.018762577Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:54:06.019008 env[1558]: time="2024-02-09T18:54:06.018989649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:54:06.019174 env[1558]: time="2024-02-09T18:54:06.019157843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:54:06.019616 env[1558]: time="2024-02-09T18:54:06.019593951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:54:06.019912 env[1558]: time="2024-02-09T18:54:06.019888879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:54:06.020002 env[1558]: time="2024-02-09T18:54:06.019987374Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:54:06.020136 env[1558]: time="2024-02-09T18:54:06.020120175Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:54:06.020204 env[1558]: time="2024-02-09T18:54:06.020193037Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031138222Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031191855Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031212767Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031254768Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031279111Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031298027Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031314607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031333901Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031351755Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031369015Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:54:06.031557 env[1558]: time="2024-02-09T18:54:06.031388447Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032092490Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032253292Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032353344Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032757348Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032797823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032816398Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032875164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032892273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032908157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032924248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032943383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032960034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032978817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.041516 env[1558]: time="2024-02-09T18:54:06.032995732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.036001 systemd[1]: Started containerd.service. Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033016303Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033172117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033192508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033210134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033226006Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033246037Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033261618Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033287828Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:54:06.042188 env[1558]: time="2024-02-09T18:54:06.033335125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 18:54:06.042755 env[1558]: time="2024-02-09T18:54:06.033641878Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:54:06.042755 env[1558]: time="2024-02-09T18:54:06.033723027Z" level=info msg="Connect containerd service" Feb 9 18:54:06.042755 env[1558]: time="2024-02-09T18:54:06.033772588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:54:06.042755 env[1558]: time="2024-02-09T18:54:06.034425566Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:54:06.042755 env[1558]: time="2024-02-09T18:54:06.035794492Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:54:06.042755 env[1558]: time="2024-02-09T18:54:06.035847504Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:54:06.042755 env[1558]: time="2024-02-09T18:54:06.035912038Z" level=info msg="containerd successfully booted in 0.263705s" Feb 9 18:54:06.052434 env[1558]: time="2024-02-09T18:54:06.052114863Z" level=info msg="Start subscribing containerd event" Feb 9 18:54:06.105832 env[1558]: time="2024-02-09T18:54:06.105786010Z" level=info msg="Start recovering state" Feb 9 18:54:06.106115 env[1558]: time="2024-02-09T18:54:06.106094243Z" level=info msg="Start event monitor" Feb 9 18:54:06.106757 env[1558]: time="2024-02-09T18:54:06.106731307Z" level=info msg="Start snapshots syncer" Feb 9 18:54:06.106966 env[1558]: time="2024-02-09T18:54:06.106945386Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:54:06.107052 env[1558]: time="2024-02-09T18:54:06.107039052Z" level=info msg="Start streaming server" Feb 9 18:54:06.162749 tar[1554]: ./ptp Feb 9 18:54:06.275879 dbus-daemon[1541]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 18:54:06.276155 systemd[1]: Started systemd-hostnamed.service. Feb 9 18:54:06.279849 dbus-daemon[1541]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1584 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 18:54:06.284420 systemd[1]: Starting polkit.service... Feb 9 18:54:06.298565 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 18:54:06.332477 polkitd[1670]: Started polkitd version 121 Feb 9 18:54:06.362799 tar[1554]: ./vlan Feb 9 18:54:06.363610 polkitd[1670]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 18:54:06.363690 polkitd[1670]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 18:54:06.371116 polkitd[1670]: Finished loading, compiling and executing 2 rules Feb 9 18:54:06.372831 dbus-daemon[1541]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 18:54:06.373114 systemd[1]: Started polkit.service. Feb 9 18:54:06.377349 polkitd[1670]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 18:54:06.420211 systemd-hostnamed[1584]: Hostname set to (transient) Feb 9 18:54:06.420325 systemd-resolved[1510]: System hostname changed to 'ip-172-31-24-123'. 
Feb 9 18:54:06.575904 tar[1554]: ./host-device Feb 9 18:54:06.676280 coreos-metadata[1540]: Feb 09 18:54:06.675 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 18:54:06.685290 coreos-metadata[1540]: Feb 09 18:54:06.685 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 18:54:06.686141 coreos-metadata[1540]: Feb 09 18:54:06.686 INFO Fetch successful Feb 9 18:54:06.686280 coreos-metadata[1540]: Feb 09 18:54:06.686 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 18:54:06.687062 coreos-metadata[1540]: Feb 09 18:54:06.686 INFO Fetch successful Feb 9 18:54:06.690887 unknown[1540]: wrote ssh authorized keys file for user: core Feb 9 18:54:06.726896 update-ssh-keys[1725]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:54:06.727805 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 18:54:06.803453 tar[1554]: ./tuning Feb 9 18:54:06.843812 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Create new startup processor Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Initializing bookkeeping folders Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO removing the completed state files Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Initializing bookkeeping folders for long running plugins Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Initializing healthcheck folders for long running plugins Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Initializing locations for inventory plugin Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Initializing default location for custom inventory Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Initializing default location for file inventory Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Initializing default location for role inventory Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Init the cloudwatchlogs publisher Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent plugin aws:configurePackage Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent plugin aws:configureDocker Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent plugin aws:runDocument Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent 
plugin aws:runPowerShellScript Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 18:54:06.856877 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform independent plugin aws:downloadContent Feb 9 18:54:06.858451 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 18:54:06.858451 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 18:54:06.858451 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO OS: linux, Arch: amd64 Feb 9 18:54:06.868014 amazon-ssm-agent[1600]: datastore file /var/lib/amazon/ssm/i-08d8aa34b4746708d/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 18:54:06.942942 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 18:54:06.948162 tar[1554]: ./vrf Feb 9 18:54:07.037609 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 18:54:07.065603 tar[1554]: ./sbr Feb 9 18:54:07.132221 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 18:54:07.182398 tar[1554]: ./tap Feb 9 18:54:07.226675 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 18:54:07.312574 tar[1554]: ./dhcp Feb 9 18:54:07.321352 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 18:54:07.416292 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 18:54:07.511303 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] listening reply. Feb 9 18:54:07.573531 tar[1554]: ./static Feb 9 18:54:07.606624 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-08d8aa34b4746708d, requestId: 12e84dc9-8f52-4e8a-b84c-3b1980719544 Feb 9 18:54:07.631669 tar[1557]: linux-amd64/LICENSE Feb 9 18:54:07.631669 tar[1557]: linux-amd64/README.md Feb 9 18:54:07.645670 systemd[1]: Finished prepare-helm.service. Feb 9 18:54:07.662573 tar[1554]: ./firewall Feb 9 18:54:07.702143 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] Starting message polling Feb 9 18:54:07.739826 systemd[1]: Finished prepare-critools.service. Feb 9 18:54:07.759010 tar[1554]: ./macvlan Feb 9 18:54:07.797752 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 18:54:07.813091 tar[1554]: ./dummy Feb 9 18:54:07.862147 tar[1554]: ./bridge Feb 9 18:54:07.893693 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [instanceID=i-08d8aa34b4746708d] Starting association polling Feb 9 18:54:07.918030 tar[1554]: ./ipvlan Feb 9 18:54:07.968838 tar[1554]: ./portmap Feb 9 18:54:07.989748 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 18:54:08.016072 tar[1554]: ./host-local Feb 9 18:54:08.076642 systemd[1]: Finished prepare-cni-plugins.service. 
Feb 9 18:54:08.086002 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 18:54:08.175371 locksmithd[1591]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:54:08.183137 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 18:54:08.279896 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [OfflineService] Starting document processing engine... Feb 9 18:54:08.376753 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [OfflineService] [EngineProcessor] Starting Feb 9 18:54:08.473915 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 18:54:08.571223 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [OfflineService] Starting message polling Feb 9 18:54:08.668619 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [OfflineService] Starting send replies to MDS Feb 9 18:54:08.766277 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 18:54:08.864171 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 18:54:08.962264 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 18:54:09.060403 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [StartupProcessor] Executing startup processor tasks Feb 9 18:54:09.158873 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 18:54:09.257460 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 18:54:09.356294 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 18:54:09.455391 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 18:54:09.554532 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 18:54:09.653944 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 18:54:09.753660 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-08d8aa34b4746708d?role=subscribe&stream=input Feb 9 18:54:09.853493 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-08d8aa34b4746708d?role=subscribe&stream=input Feb 9 18:54:09.953537 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 18:54:10.053786 amazon-ssm-agent[1600]: 2024-02-09 18:54:06 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 18:54:10.235224 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:54:10.261954 systemd[1]: Finished sshd-keygen.service. Feb 9 18:54:10.265313 systemd[1]: Starting issuegen.service... Feb 9 18:54:10.272821 systemd[1]: issuegen.service: Deactivated successfully. 
Feb 9 18:54:10.273019 systemd[1]: Finished issuegen.service. Feb 9 18:54:10.276579 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:54:10.285561 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:54:10.288847 systemd[1]: Started getty@tty1.service. Feb 9 18:54:10.292304 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 18:54:10.293596 systemd[1]: Reached target getty.target. Feb 9 18:54:10.294708 systemd[1]: Reached target multi-user.target. Feb 9 18:54:10.297244 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:54:10.306934 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:54:10.307097 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:54:10.308681 systemd[1]: Startup finished in 766ms (kernel) + 29.842s (initrd) + 11.627s (userspace) = 42.237s. Feb 9 18:54:14.385143 systemd[1]: Created slice system-sshd.slice. Feb 9 18:54:14.388086 systemd[1]: Started sshd@0-172.31.24.123:22-139.178.68.195:54680.service. Feb 9 18:54:14.584735 sshd[1759]: Accepted publickey for core from 139.178.68.195 port 54680 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:54:14.588453 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:54:14.607831 systemd[1]: Created slice user-500.slice. Feb 9 18:54:14.609689 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:54:14.616761 systemd-logind[1550]: New session 1 of user core. Feb 9 18:54:14.627539 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:54:14.630435 systemd[1]: Starting user@500.service... Feb 9 18:54:14.635201 (systemd)[1762]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:54:14.771457 systemd[1762]: Queued start job for default target default.target. Feb 9 18:54:14.772139 systemd[1762]: Reached target paths.target. Feb 9 18:54:14.772261 systemd[1762]: Reached target sockets.target. Feb 9 18:54:14.772291 systemd[1762]: Reached target timers.target. Feb 9 18:54:14.772308 systemd[1762]: Reached target basic.target. Feb 9 18:54:14.772432 systemd[1]: Started user@500.service. Feb 9 18:54:14.774896 systemd[1]: Started session-1.scope. Feb 9 18:54:14.775494 systemd[1762]: Reached target default.target. Feb 9 18:54:14.775704 systemd[1762]: Startup finished in 132ms. Feb 9 18:54:14.923656 systemd[1]: Started sshd@1-172.31.24.123:22-139.178.68.195:54682.service. Feb 9 18:54:15.091721 sshd[1771]: Accepted publickey for core from 139.178.68.195 port 54682 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:54:15.093265 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:54:15.098792 systemd[1]: Started session-2.scope. Feb 9 18:54:15.099434 systemd-logind[1550]: New session 2 of user core. Feb 9 18:54:15.233203 sshd[1771]: pam_unix(sshd:session): session closed for user core Feb 9 18:54:15.236580 systemd[1]: sshd@1-172.31.24.123:22-139.178.68.195:54682.service: Deactivated successfully. Feb 9 18:54:15.237453 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:54:15.238204 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:54:15.239076 systemd-logind[1550]: Removed session 2. Feb 9 18:54:15.270495 systemd[1]: Started sshd@2-172.31.24.123:22-139.178.68.195:54698.service. 
Feb 9 18:54:15.437832 sshd[1777]: Accepted publickey for core from 139.178.68.195 port 54698 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:54:15.439565 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:54:15.444169 systemd-logind[1550]: New session 3 of user core. Feb 9 18:54:15.444752 systemd[1]: Started session-3.scope. Feb 9 18:54:15.569731 sshd[1777]: pam_unix(sshd:session): session closed for user core Feb 9 18:54:15.572699 systemd[1]: sshd@2-172.31.24.123:22-139.178.68.195:54698.service: Deactivated successfully. Feb 9 18:54:15.573643 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:54:15.574260 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:54:15.575105 systemd-logind[1550]: Removed session 3. Feb 9 18:54:15.595110 systemd[1]: Started sshd@3-172.31.24.123:22-139.178.68.195:54702.service. Feb 9 18:54:15.754433 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 54702 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:54:15.755863 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:54:15.761075 systemd[1]: Started session-4.scope. Feb 9 18:54:15.761745 systemd-logind[1550]: New session 4 of user core. Feb 9 18:54:15.888736 sshd[1783]: pam_unix(sshd:session): session closed for user core Feb 9 18:54:15.894857 systemd[1]: sshd@3-172.31.24.123:22-139.178.68.195:54702.service: Deactivated successfully. Feb 9 18:54:15.897209 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:54:15.904944 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:54:15.906094 systemd-logind[1550]: Removed session 4. Feb 9 18:54:15.917137 systemd[1]: Started sshd@4-172.31.24.123:22-139.178.68.195:54706.service. Feb 9 18:54:16.092649 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 54706 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:54:16.094760 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:54:16.101417 systemd[1]: Started session-5.scope. Feb 9 18:54:16.102155 systemd-logind[1550]: New session 5 of user core. Feb 9 18:54:16.222231 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:54:16.222538 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:54:17.137795 systemd[1]: Starting docker.service... 
Feb 9 18:54:17.180894 env[1807]: time="2024-02-09T18:54:17.180853648Z" level=info msg="Starting up" Feb 9 18:54:17.182739 env[1807]: time="2024-02-09T18:54:17.182599493Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:54:17.182739 env[1807]: time="2024-02-09T18:54:17.182624933Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:54:17.182739 env[1807]: time="2024-02-09T18:54:17.182650048Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:54:17.182739 env[1807]: time="2024-02-09T18:54:17.182663725Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:54:17.184870 env[1807]: time="2024-02-09T18:54:17.184833258Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:54:17.184870 env[1807]: time="2024-02-09T18:54:17.184855549Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:54:17.185011 env[1807]: time="2024-02-09T18:54:17.184874369Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:54:17.185011 env[1807]: time="2024-02-09T18:54:17.184887741Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:54:17.191951 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2002454400-merged.mount: Deactivated successfully. Feb 9 18:54:17.253057 env[1807]: time="2024-02-09T18:54:17.253021766Z" level=info msg="Loading containers: start." Feb 9 18:54:17.369510 kernel: Initializing XFRM netlink socket Feb 9 18:54:17.404853 env[1807]: time="2024-02-09T18:54:17.404747622Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:54:17.406352 (udev-worker)[1816]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:54:17.530582 systemd-networkd[1374]: docker0: Link UP Feb 9 18:54:17.543669 env[1807]: time="2024-02-09T18:54:17.543629827Z" level=info msg="Loading containers: done." Feb 9 18:54:17.568429 env[1807]: time="2024-02-09T18:54:17.568373952Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:54:17.568682 env[1807]: time="2024-02-09T18:54:17.568641553Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:54:17.568788 env[1807]: time="2024-02-09T18:54:17.568762459Z" level=info msg="Daemon has completed initialization" Feb 9 18:54:17.588540 systemd[1]: Started docker.service. Feb 9 18:54:17.600608 env[1807]: time="2024-02-09T18:54:17.600541118Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:54:17.619848 systemd[1]: Reloading. 
Feb 9 18:54:17.702103 /usr/lib/systemd/system-generators/torcx-generator[1942]: time="2024-02-09T18:54:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:54:17.702144 /usr/lib/systemd/system-generators/torcx-generator[1942]: time="2024-02-09T18:54:17Z" level=info msg="torcx already run" Feb 9 18:54:17.802312 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:54:17.802335 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:54:17.825318 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:54:17.925758 systemd[1]: Started kubelet.service. Feb 9 18:54:18.002747 kubelet[1995]: E0209 18:54:18.002689 1995 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 18:54:18.005754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:54:18.005875 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:54:18.660336 env[1558]: time="2024-02-09T18:54:18.660284463Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 9 18:54:19.314810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285991084.mount: Deactivated successfully. 
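[Annotation] kubelet.service exits above (status=1/FAILURE) because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written during kubeadm init/join, so this crash-and-restart loop is expected until bootstrap completes; the control-plane image pulls interleaved below are consistent with that bootstrap being in progress. As a sketch of the expected shape only (kubeadm generates the real file; writing a stub by hand is purely illustrative):

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup:true in the containerd config dumped earlier
    EOF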
Feb 9 18:54:21.354591 env[1558]: time="2024-02-09T18:54:21.354541152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:21.358042 env[1558]: time="2024-02-09T18:54:21.357998917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:21.360848 env[1558]: time="2024-02-09T18:54:21.360806182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:21.363442 env[1558]: time="2024-02-09T18:54:21.363405449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:21.364191 env[1558]: time="2024-02-09T18:54:21.364148575Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 9 18:54:21.380213 env[1558]: time="2024-02-09T18:54:21.380145560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 18:54:23.816136 env[1558]: time="2024-02-09T18:54:23.816073077Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:23.819221 env[1558]: time="2024-02-09T18:54:23.819175546Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:23.822124 env[1558]: time="2024-02-09T18:54:23.822080200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:23.825153 env[1558]: time="2024-02-09T18:54:23.825111966Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:23.826174 env[1558]: time="2024-02-09T18:54:23.826134891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 9 18:54:23.841110 env[1558]: time="2024-02-09T18:54:23.841065351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 18:54:25.420127 env[1558]: time="2024-02-09T18:54:25.420077393Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:25.422982 env[1558]: time="2024-02-09T18:54:25.422939153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:25.427022 env[1558]: 
time="2024-02-09T18:54:25.426975284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:25.429539 env[1558]: time="2024-02-09T18:54:25.429502925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:25.430473 env[1558]: time="2024-02-09T18:54:25.430436267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 9 18:54:25.442809 env[1558]: time="2024-02-09T18:54:25.442766363Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 18:54:26.843261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451065526.mount: Deactivated successfully. Feb 9 18:54:27.505935 env[1558]: time="2024-02-09T18:54:27.505852663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:27.509523 env[1558]: time="2024-02-09T18:54:27.509468841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:27.511660 env[1558]: time="2024-02-09T18:54:27.511623769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:27.514089 env[1558]: time="2024-02-09T18:54:27.514057222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:27.514565 env[1558]: time="2024-02-09T18:54:27.514531577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 9 18:54:27.526792 env[1558]: time="2024-02-09T18:54:27.526756152Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:54:28.021994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:54:28.022217 systemd[1]: Stopped kubelet.service. Feb 9 18:54:28.024280 systemd[1]: Started kubelet.service. Feb 9 18:54:28.047416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361823914.mount: Deactivated successfully. 
Feb 9 18:54:28.058218 env[1558]: time="2024-02-09T18:54:28.058169025Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:28.062804 env[1558]: time="2024-02-09T18:54:28.062763215Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:28.067815 env[1558]: time="2024-02-09T18:54:28.067768849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:28.071090 env[1558]: time="2024-02-09T18:54:28.071048981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:28.072136 env[1558]: time="2024-02-09T18:54:28.072091061Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 18:54:28.089191 env[1558]: time="2024-02-09T18:54:28.089150038Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 18:54:28.118271 kubelet[2033]: E0209 18:54:28.118184 2033 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 18:54:28.122731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:54:28.122964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:54:29.013191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013727445.mount: Deactivated successfully. Feb 9 18:54:31.361604 amazon-ssm-agent[1600]: 2024-02-09 18:54:31 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Feb 9 18:54:35.045809 env[1558]: time="2024-02-09T18:54:35.045746322Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:35.049848 env[1558]: time="2024-02-09T18:54:35.049803076Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:35.052588 env[1558]: time="2024-02-09T18:54:35.052552403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:35.055445 env[1558]: time="2024-02-09T18:54:35.055404350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:35.056461 env[1558]: time="2024-02-09T18:54:35.056422520Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 9 18:54:35.077957 env[1558]: time="2024-02-09T18:54:35.077914949Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 18:54:35.710195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638713231.mount: Deactivated successfully. Feb 9 18:54:36.452841 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 18:54:36.593056 env[1558]: time="2024-02-09T18:54:36.593006242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:36.596289 env[1558]: time="2024-02-09T18:54:36.596195221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:36.598445 env[1558]: time="2024-02-09T18:54:36.598407449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:36.600594 env[1558]: time="2024-02-09T18:54:36.600557329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:36.601274 env[1558]: time="2024-02-09T18:54:36.601239102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 18:54:38.229156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 18:54:38.229433 systemd[1]: Stopped kubelet.service. Feb 9 18:54:38.234618 systemd[1]: Started kubelet.service. 
Feb 9 18:54:38.332781 kubelet[2114]: E0209 18:54:38.332731 2114 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 18:54:38.335083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:54:38.335254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:54:39.374968 systemd[1]: Stopped kubelet.service. Feb 9 18:54:39.401678 systemd[1]: Reloading. Feb 9 18:54:39.499323 /usr/lib/systemd/system-generators/torcx-generator[2144]: time="2024-02-09T18:54:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:54:39.511625 /usr/lib/systemd/system-generators/torcx-generator[2144]: time="2024-02-09T18:54:39Z" level=info msg="torcx already run" Feb 9 18:54:39.636066 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:54:39.636094 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:54:39.662630 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:54:39.837705 systemd[1]: Started kubelet.service. Feb 9 18:54:39.921925 kubelet[2196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:54:39.921925 kubelet[2196]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 18:54:39.921925 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 18:54:39.922522 kubelet[2196]: I0209 18:54:39.922288 2196 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:54:40.082997 kubelet[2196]: I0209 18:54:40.082946 2196 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 18:54:40.082997 kubelet[2196]: I0209 18:54:40.082994 2196 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:54:40.083457 kubelet[2196]: I0209 18:54:40.083437 2196 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 18:54:40.105252 kubelet[2196]: E0209 18:54:40.105217 2196 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.105632 kubelet[2196]: I0209 18:54:40.105613 2196 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:54:40.107287 kubelet[2196]: I0209 18:54:40.107253 2196 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 18:54:40.107565 kubelet[2196]: I0209 18:54:40.107547 2196 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:54:40.107674 kubelet[2196]: I0209 18:54:40.107638 2196 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:54:40.107791 kubelet[2196]: I0209 18:54:40.107685 2196 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:54:40.107791 kubelet[2196]: I0209 18:54:40.107702 2196 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 18:54:40.107885 kubelet[2196]: I0209 18:54:40.107864 2196 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:54:40.115140 kubelet[2196]: I0209 18:54:40.115110 2196 kubelet.go:405] "Attempting to sync node with API server" Feb 9 18:54:40.115140 kubelet[2196]: I0209 18:54:40.115140 2196 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" 
Feb 9 18:54:40.115353 kubelet[2196]: I0209 18:54:40.115168 2196 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:54:40.115353 kubelet[2196]: I0209 18:54:40.115183 2196 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:54:40.116248 kubelet[2196]: W0209 18:54:40.116200 2196 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.24.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.116372 kubelet[2196]: E0209 18:54:40.116260 2196 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.116372 kubelet[2196]: I0209 18:54:40.116350 2196 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:54:40.116697 kubelet[2196]: W0209 18:54:40.116680 2196 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:54:40.117200 kubelet[2196]: I0209 18:54:40.117181 2196 server.go:1168] "Started kubelet" Feb 9 18:54:40.117334 kubelet[2196]: W0209 18:54:40.117294 2196 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.24.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-123&limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.117397 kubelet[2196]: E0209 18:54:40.117349 2196 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-123&limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.121232 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 18:54:40.121386 kubelet[2196]: I0209 18:54:40.121364 2196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:54:40.125281 kubelet[2196]: E0209 18:54:40.125108 2196 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-24-123.17b246a447a10156", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-24-123", UID:"ip-172-31-24-123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-24-123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 54, 40, 117154134, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 54, 40, 117154134, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.24.123:6443/api/v1/namespaces/default/events": dial tcp 172.31.24.123:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:54:40.125469 kubelet[2196]: E0209 18:54:40.125445 2196 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:54:40.125546 kubelet[2196]: E0209 18:54:40.125471 2196 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:54:40.127150 kubelet[2196]: I0209 18:54:40.127127 2196 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:54:40.127770 kubelet[2196]: I0209 18:54:40.127746 2196 server.go:461] "Adding debug handlers to kubelet server" Feb 9 18:54:40.129135 kubelet[2196]: I0209 18:54:40.129109 2196 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:54:40.130694 kubelet[2196]: I0209 18:54:40.129760 2196 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 18:54:40.130694 kubelet[2196]: I0209 18:54:40.129853 2196 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 18:54:40.130694 kubelet[2196]: W0209 18:54:40.130262 2196 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.24.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.130694 kubelet[2196]: E0209 18:54:40.130317 2196 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.130941 kubelet[2196]: E0209 18:54:40.130850 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-123?timeout=10s\": dial tcp 172.31.24.123:6443: connect: connection refused" interval="200ms" Feb 9 18:54:40.149706 kubelet[2196]: I0209 18:54:40.149671 2196 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:54:40.158606 kubelet[2196]: I0209 18:54:40.158580 2196 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:54:40.158849 kubelet[2196]: I0209 18:54:40.158834 2196 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 18:54:40.158977 kubelet[2196]: I0209 18:54:40.158967 2196 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 18:54:40.159285 kubelet[2196]: E0209 18:54:40.159268 2196 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:54:40.161402 kubelet[2196]: W0209 18:54:40.161376 2196 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.24.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.162551 kubelet[2196]: E0209 18:54:40.162528 2196 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:40.168641 kubelet[2196]: I0209 18:54:40.168474 2196 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:54:40.168641 kubelet[2196]: I0209 18:54:40.168638 2196 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:54:40.168815 kubelet[2196]: I0209 18:54:40.168656 2196 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:54:40.171540 kubelet[2196]: I0209 18:54:40.171515 2196 policy_none.go:49] "None policy: Start" Feb 9 18:54:40.174565 kubelet[2196]: I0209 18:54:40.172776 2196 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:54:40.174565 kubelet[2196]: I0209 18:54:40.172812 2196 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:54:40.188654 systemd[1]: Created slice kubepods.slice. Feb 9 18:54:40.194529 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 18:54:40.199304 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:54:40.205642 kubelet[2196]: I0209 18:54:40.205614 2196 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:54:40.206083 kubelet[2196]: I0209 18:54:40.205866 2196 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:54:40.208553 kubelet[2196]: E0209 18:54:40.208162 2196 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-123\" not found" Feb 9 18:54:40.231884 kubelet[2196]: I0209 18:54:40.231849 2196 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-123" Feb 9 18:54:40.232342 kubelet[2196]: E0209 18:54:40.232317 2196 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.123:6443/api/v1/nodes\": dial tcp 172.31.24.123:6443: connect: connection refused" node="ip-172-31-24-123" Feb 9 18:54:40.259594 kubelet[2196]: I0209 18:54:40.259543 2196 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:54:40.261283 kubelet[2196]: I0209 18:54:40.261258 2196 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:54:40.266686 kubelet[2196]: I0209 18:54:40.266651 2196 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:54:40.278308 systemd[1]: Created slice kubepods-burstable-pod852240fdea7dff95dc79143405b96279.slice. 
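The slices just created are the systemd cgroup driver's QoS hierarchy (CgroupDriver:systemd in the NodeConfig above): kubepods.slice at the root, with burstable and best-effort children, plus one slice per pod such as the kubepods-burstable-pod852240... slice at the end of the previous entry. A small sketch of the naming convention; the helper is hypothetical, not kubelet code, and the dash-to-underscore rewrite matches the pod slices visible later in this log.

package main

import (
    "fmt"
    "strings"
)

// podSlice sketches the systemd slice name for a pod: the QoS parent is
// folded into the prefix, and dashes in the pod UID become underscores
// (e.g. cff362a0-3168-... -> ...podcff362a0_3168_..., as seen below).
func podSlice(qos, uid string) string {
    parent := "kubepods"
    if qos != "Guaranteed" {
        parent += "-" + strings.ToLower(qos)
    }
    return fmt.Sprintf("%s-pod%s.slice", parent, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
    fmt.Println(podSlice("Burstable", "852240fdea7dff95dc79143405b96279"))
    // kubepods-burstable-pod852240fdea7dff95dc79143405b96279.slice
}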
Feb 9 18:54:40.301782 systemd[1]: Created slice kubepods-burstable-pod827d239473ebbfd39d144bee5531520f.slice. Feb 9 18:54:40.307258 systemd[1]: Created slice kubepods-burstable-podb3bacd205a2431a0a31647fe92c712a7.slice. Feb 9 18:54:40.330651 kubelet[2196]: I0209 18:54:40.330603 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:40.330651 kubelet[2196]: I0209 18:54:40.330652 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:40.330900 kubelet[2196]: I0209 18:54:40.330681 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3bacd205a2431a0a31647fe92c712a7-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-123\" (UID: \"b3bacd205a2431a0a31647fe92c712a7\") " pod="kube-system/kube-scheduler-ip-172-31-24-123" Feb 9 18:54:40.330900 kubelet[2196]: I0209 18:54:40.330706 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/852240fdea7dff95dc79143405b96279-ca-certs\") pod \"kube-apiserver-ip-172-31-24-123\" (UID: \"852240fdea7dff95dc79143405b96279\") " pod="kube-system/kube-apiserver-ip-172-31-24-123" Feb 9 18:54:40.330900 kubelet[2196]: I0209 18:54:40.330733 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:40.330900 kubelet[2196]: I0209 18:54:40.330793 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:40.330900 kubelet[2196]: I0209 18:54:40.330825 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:40.331118 kubelet[2196]: I0209 18:54:40.330852 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/852240fdea7dff95dc79143405b96279-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-123\" (UID: \"852240fdea7dff95dc79143405b96279\") " pod="kube-system/kube-apiserver-ip-172-31-24-123" Feb 9 18:54:40.331118 kubelet[2196]: I0209 
18:54:40.330887 2196 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/852240fdea7dff95dc79143405b96279-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-123\" (UID: \"852240fdea7dff95dc79143405b96279\") " pod="kube-system/kube-apiserver-ip-172-31-24-123" Feb 9 18:54:40.331476 kubelet[2196]: E0209 18:54:40.331445 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-123?timeout=10s\": dial tcp 172.31.24.123:6443: connect: connection refused" interval="400ms" Feb 9 18:54:40.435602 kubelet[2196]: I0209 18:54:40.434442 2196 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-123" Feb 9 18:54:40.435937 kubelet[2196]: E0209 18:54:40.435913 2196 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.123:6443/api/v1/nodes\": dial tcp 172.31.24.123:6443: connect: connection refused" node="ip-172-31-24-123" Feb 9 18:54:40.598318 env[1558]: time="2024-02-09T18:54:40.598274851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-123,Uid:852240fdea7dff95dc79143405b96279,Namespace:kube-system,Attempt:0,}" Feb 9 18:54:40.610025 env[1558]: time="2024-02-09T18:54:40.609941591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-123,Uid:827d239473ebbfd39d144bee5531520f,Namespace:kube-system,Attempt:0,}" Feb 9 18:54:40.611152 env[1558]: time="2024-02-09T18:54:40.611116059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-123,Uid:b3bacd205a2431a0a31647fe92c712a7,Namespace:kube-system,Attempt:0,}" Feb 9 18:54:40.731937 kubelet[2196]: E0209 18:54:40.731903 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-123?timeout=10s\": dial tcp 172.31.24.123:6443: connect: connection refused" interval="800ms" Feb 9 18:54:40.838196 kubelet[2196]: I0209 18:54:40.838160 2196 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-123" Feb 9 18:54:40.838734 kubelet[2196]: E0209 18:54:40.838683 2196 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.123:6443/api/v1/nodes\": dial tcp 172.31.24.123:6443: connect: connection refused" node="ip-172-31-24-123" Feb 9 18:54:41.067517 kubelet[2196]: W0209 18:54:41.067222 2196 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.24.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:41.067517 kubelet[2196]: E0209 18:54:41.067417 2196 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.123:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:41.120037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523519280.mount: Deactivated successfully. 
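Note how the lease controller's retry interval doubles across the failures above and below: 200ms, then 400ms, then 800ms, and finally 1.6s. A trivial Go sketch of that exponential backoff, using the intervals actually logged:

package main

import (
    "fmt"
    "time"
)

func main() {
    // The "Failed to ensure lease exists, will retry" interval doubles each
    // time: 200ms -> 400ms -> 800ms -> 1.6s, matching the log entries here.
    interval := 200 * time.Millisecond
    for i := 0; i < 4; i++ {
        fmt.Printf("retry #%d in %v\n", i+1, interval)
        interval *= 2
    }
}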
Feb 9 18:54:41.132007 env[1558]: time="2024-02-09T18:54:41.131961334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.136971 env[1558]: time="2024-02-09T18:54:41.136927005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.139229 env[1558]: time="2024-02-09T18:54:41.139101735Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.140325 env[1558]: time="2024-02-09T18:54:41.140285747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.144098 env[1558]: time="2024-02-09T18:54:41.144059545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.145570 env[1558]: time="2024-02-09T18:54:41.145539032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.147691 env[1558]: time="2024-02-09T18:54:41.147657807Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.150614 env[1558]: time="2024-02-09T18:54:41.150583879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.153241 env[1558]: time="2024-02-09T18:54:41.153137123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.154734 env[1558]: time="2024-02-09T18:54:41.154674768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.156547 env[1558]: time="2024-02-09T18:54:41.156519001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.163225 env[1558]: time="2024-02-09T18:54:41.163167977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:54:41.237120 env[1558]: time="2024-02-09T18:54:41.236895812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:54:41.237408 env[1558]: time="2024-02-09T18:54:41.237013224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:54:41.237552 env[1558]: time="2024-02-09T18:54:41.237398905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:54:41.237958 env[1558]: time="2024-02-09T18:54:41.237903329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f34bdeb591585f82a0ce55db5988ed668a86c5add1f3eaa51d3ca50cbb44ee pid=2242 runtime=io.containerd.runc.v2 Feb 9 18:54:41.242656 env[1558]: time="2024-02-09T18:54:41.242549553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:54:41.242656 env[1558]: time="2024-02-09T18:54:41.242589729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:54:41.242656 env[1558]: time="2024-02-09T18:54:41.242605451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:54:41.243056 env[1558]: time="2024-02-09T18:54:41.243005208Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab3e16c567279c78b351241a0205493ce7bb714a21932aeb7484cccc06e2267d pid=2250 runtime=io.containerd.runc.v2 Feb 9 18:54:41.256314 env[1558]: time="2024-02-09T18:54:41.256188719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:54:41.256314 env[1558]: time="2024-02-09T18:54:41.256257179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:54:41.256314 env[1558]: time="2024-02-09T18:54:41.256273211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:54:41.257186 env[1558]: time="2024-02-09T18:54:41.257065753Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/901f3a365e01bcb527a778d40e3e24f61a1b2f6525d2dc5768c118e7a721ce3f pid=2261 runtime=io.containerd.runc.v2 Feb 9 18:54:41.270493 systemd[1]: Started cri-containerd-57f34bdeb591585f82a0ce55db5988ed668a86c5add1f3eaa51d3ca50cbb44ee.scope. Feb 9 18:54:41.318294 systemd[1]: Started cri-containerd-ab3e16c567279c78b351241a0205493ce7bb714a21932aeb7484cccc06e2267d.scope. Feb 9 18:54:41.328206 systemd[1]: Started cri-containerd-901f3a365e01bcb527a778d40e3e24f61a1b2f6525d2dc5768c118e7a721ce3f.scope. 
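The sandboxes above run under containerd's CRI namespace "k8s.io", visible in the shim task paths, and are backed by the registry.k8s.io/pause:3.6 image from the preceding ImageCreate/ImageUpdate events. A short sketch using the containerd Go client to list those images; the default socket path is assumed.

package main

import (
    "context"
    "fmt"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Assumed default containerd socket path.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        panic(err)
    }
    defer client.Close()

    // The CRI plugin keeps Kubernetes resources in the "k8s.io" namespace,
    // the same one that appears in the shim task paths above.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    images, err := client.ListImages(ctx)
    if err != nil {
        panic(err)
    }
    for _, img := range images {
        fmt.Println(img.Name()) // e.g. registry.k8s.io/pause:3.6
    }
}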
Feb 9 18:54:41.420174 env[1558]: time="2024-02-09T18:54:41.420129148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-123,Uid:b3bacd205a2431a0a31647fe92c712a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"57f34bdeb591585f82a0ce55db5988ed668a86c5add1f3eaa51d3ca50cbb44ee\"" Feb 9 18:54:41.425946 env[1558]: time="2024-02-09T18:54:41.425456386Z" level=info msg="CreateContainer within sandbox \"57f34bdeb591585f82a0ce55db5988ed668a86c5add1f3eaa51d3ca50cbb44ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:54:41.439586 kubelet[2196]: W0209 18:54:41.439521 2196 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.24.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-123&limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:41.439586 kubelet[2196]: E0209 18:54:41.439598 2196 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-123&limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:41.444321 env[1558]: time="2024-02-09T18:54:41.444270248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-123,Uid:852240fdea7dff95dc79143405b96279,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab3e16c567279c78b351241a0205493ce7bb714a21932aeb7484cccc06e2267d\"" Feb 9 18:54:41.451039 env[1558]: time="2024-02-09T18:54:41.450998619Z" level=info msg="CreateContainer within sandbox \"ab3e16c567279c78b351241a0205493ce7bb714a21932aeb7484cccc06e2267d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:54:41.457726 env[1558]: time="2024-02-09T18:54:41.457693753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-123,Uid:827d239473ebbfd39d144bee5531520f,Namespace:kube-system,Attempt:0,} returns sandbox id \"901f3a365e01bcb527a778d40e3e24f61a1b2f6525d2dc5768c118e7a721ce3f\"" Feb 9 18:54:41.462388 env[1558]: time="2024-02-09T18:54:41.462358364Z" level=info msg="CreateContainer within sandbox \"901f3a365e01bcb527a778d40e3e24f61a1b2f6525d2dc5768c118e7a721ce3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:54:41.509530 kubelet[2196]: W0209 18:54:41.509412 2196 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.24.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:41.509530 kubelet[2196]: E0209 18:54:41.509502 2196 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:41.526813 kubelet[2196]: W0209 18:54:41.526656 2196 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.24.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:41.526813 kubelet[2196]: E0209 18:54:41.526743 2196 reflector.go:148] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:41.532557 kubelet[2196]: E0209 18:54:41.532528 2196 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-123?timeout=10s\": dial tcp 172.31.24.123:6443: connect: connection refused" interval="1.6s" Feb 9 18:54:41.547095 env[1558]: time="2024-02-09T18:54:41.547029914Z" level=info msg="CreateContainer within sandbox \"57f34bdeb591585f82a0ce55db5988ed668a86c5add1f3eaa51d3ca50cbb44ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa\"" Feb 9 18:54:41.547968 env[1558]: time="2024-02-09T18:54:41.547938129Z" level=info msg="StartContainer for \"8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa\"" Feb 9 18:54:41.563137 env[1558]: time="2024-02-09T18:54:41.563085917Z" level=info msg="CreateContainer within sandbox \"ab3e16c567279c78b351241a0205493ce7bb714a21932aeb7484cccc06e2267d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"85a9d9ff3e2a7d16c4bb1c9e5f1a2a35b7efdb5d31279016b09c1f8d4c58ea0b\"" Feb 9 18:54:41.564212 env[1558]: time="2024-02-09T18:54:41.564176918Z" level=info msg="StartContainer for \"85a9d9ff3e2a7d16c4bb1c9e5f1a2a35b7efdb5d31279016b09c1f8d4c58ea0b\"" Feb 9 18:54:41.574238 systemd[1]: Started cri-containerd-8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa.scope. Feb 9 18:54:41.579677 env[1558]: time="2024-02-09T18:54:41.579643311Z" level=info msg="CreateContainer within sandbox \"901f3a365e01bcb527a778d40e3e24f61a1b2f6525d2dc5768c118e7a721ce3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4\"" Feb 9 18:54:41.580510 env[1558]: time="2024-02-09T18:54:41.580462269Z" level=info msg="StartContainer for \"83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4\"" Feb 9 18:54:41.628242 systemd[1]: Started cri-containerd-85a9d9ff3e2a7d16c4bb1c9e5f1a2a35b7efdb5d31279016b09c1f8d4c58ea0b.scope. Feb 9 18:54:41.639370 systemd[1]: Started cri-containerd-83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4.scope. 
Feb 9 18:54:41.651640 kubelet[2196]: I0209 18:54:41.651147 2196 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-123" Feb 9 18:54:41.651640 kubelet[2196]: E0209 18:54:41.651573 2196 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.24.123:6443/api/v1/nodes\": dial tcp 172.31.24.123:6443: connect: connection refused" node="ip-172-31-24-123" Feb 9 18:54:41.701587 env[1558]: time="2024-02-09T18:54:41.701540452Z" level=info msg="StartContainer for \"8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa\" returns successfully" Feb 9 18:54:41.751460 env[1558]: time="2024-02-09T18:54:41.751410299Z" level=info msg="StartContainer for \"85a9d9ff3e2a7d16c4bb1c9e5f1a2a35b7efdb5d31279016b09c1f8d4c58ea0b\" returns successfully" Feb 9 18:54:41.772814 env[1558]: time="2024-02-09T18:54:41.772766975Z" level=info msg="StartContainer for \"83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4\" returns successfully" Feb 9 18:54:42.238223 kubelet[2196]: E0209 18:54:42.238193 2196 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.123:6443: connect: connection refused Feb 9 18:54:43.253580 kubelet[2196]: I0209 18:54:43.253558 2196 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-123" Feb 9 18:54:44.759514 kubelet[2196]: E0209 18:54:44.759445 2196 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-123\" not found" node="ip-172-31-24-123" Feb 9 18:54:44.842187 kubelet[2196]: I0209 18:54:44.842141 2196 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-24-123" Feb 9 18:54:44.884585 kubelet[2196]: E0209 18:54:44.884446 2196 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-24-123.17b246a447a10156", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-24-123", UID:"ip-172-31-24-123", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-24-123"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 54, 40, 117154134, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 54, 40, 117154134, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
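The rejected event above is the kubelet's "Starting kubelet." Normal event, refused because the "default" namespace did not exist yet when the node first came up. A hedged client-go sketch of posting an equivalent event; the kubeconfig path is assumed, and error handling is reduced to panics for brevity.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ev := &corev1.Event{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "ip-172-31-24-123."},
        InvolvedObject: corev1.ObjectReference{
            Kind: "Node", Name: "ip-172-31-24-123", UID: "ip-172-31-24-123",
        },
        Reason:  "Starting",
        Message: "Starting kubelet.",
        Type:    corev1.EventTypeNormal,
        Source:  corev1.EventSource{Component: "kubelet", Host: "ip-172-31-24-123"},
    }
    // Rejected exactly as logged until the "default" namespace exists.
    _, err = cs.CoreV1().Events("default").Create(context.TODO(), ev, metav1.CreateOptions{})
    fmt.Println("event create:", err)
}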
Feb 9 18:54:45.120142 kubelet[2196]: I0209 18:54:45.119952 2196 apiserver.go:52] "Watching apiserver" Feb 9 18:54:45.130015 kubelet[2196]: I0209 18:54:45.129972 2196 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 18:54:45.170655 kubelet[2196]: I0209 18:54:45.170608 2196 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:54:47.504807 systemd[1]: Reloading. Feb 9 18:54:47.624368 /usr/lib/systemd/system-generators/torcx-generator[2488]: time="2024-02-09T18:54:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:54:47.626805 /usr/lib/systemd/system-generators/torcx-generator[2488]: time="2024-02-09T18:54:47Z" level=info msg="torcx already run" Feb 9 18:54:47.749218 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:54:47.749241 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:54:47.780192 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:54:47.939854 kubelet[2196]: I0209 18:54:47.939824 2196 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:54:47.940340 systemd[1]: Stopping kubelet.service... Feb 9 18:54:47.958393 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:54:47.958829 systemd[1]: Stopped kubelet.service. Feb 9 18:54:47.961571 systemd[1]: Started kubelet.service. Feb 9 18:54:48.082449 kubelet[2540]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:54:48.082860 kubelet[2540]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 18:54:48.082978 kubelet[2540]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:54:48.083319 kubelet[2540]: I0209 18:54:48.083275 2540 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:54:48.103041 kubelet[2540]: I0209 18:54:48.103005 2540 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 18:54:48.103041 kubelet[2540]: I0209 18:54:48.103033 2540 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:54:48.104874 kubelet[2540]: I0209 18:54:48.103550 2540 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 18:54:48.108585 kubelet[2540]: I0209 18:54:48.108182 2540 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
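"Client rotation is on" together with the certificate_store line above means the restarted kubelet now authenticates with the rotated credential in kubelet-client-current.pem, which by convention is a single PEM file holding both the certificate chain and the private key. A minimal sketch of loading such a combined PEM in Go; passing the same file for both arguments works precisely because it contains both blocks.

package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    // kubelet-client-current.pem carries the cert chain and the key in one file,
    // so it serves as both arguments here.
    pair, err := tls.LoadX509KeyPair(
        "/var/lib/kubelet/pki/kubelet-client-current.pem",
        "/var/lib/kubelet/pki/kubelet-client-current.pem",
    )
    if err != nil {
        fmt.Println("load failed:", err)
        return
    }
    fmt.Printf("loaded client chain with %d certificate(s)\n", len(pair.Certificate))
}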
Feb 9 18:54:48.110683 sudo[2550]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 18:54:48.111062 sudo[2550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 18:54:48.118190 kubelet[2540]: I0209 18:54:48.116463 2540 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 18:54:48.118190 kubelet[2540]: I0209 18:54:48.116953 2540 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:54:48.118190 kubelet[2540]: I0209 18:54:48.117040 2540 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:54:48.118190 kubelet[2540]: I0209 18:54:48.117133 2540 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:54:48.118190 kubelet[2540]: I0209 18:54:48.117153 2540 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 18:54:48.118190 kubelet[2540]: I0209 18:54:48.117193 2540 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:54:48.120006 kubelet[2540]: I0209 18:54:48.119980 2540 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:54:48.128648 kubelet[2540]: I0209 18:54:48.127442 2540 kubelet.go:405] "Attempting to sync node with API server" Feb 9 18:54:48.128648 kubelet[2540]: I0209 18:54:48.127477 2540 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:54:48.128648 kubelet[2540]: I0209 18:54:48.127992 2540 kubelet.go:309] "Adding apiserver pod source" Feb 9 18:54:48.128648 kubelet[2540]: I0209 18:54:48.128500 2540 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:54:48.136898 kubelet[2540]: I0209 18:54:48.136871 2540 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:54:48.138800 kubelet[2540]: I0209 18:54:48.138777 2540 server.go:1168] "Started kubelet" Feb 9 18:54:48.168927 kubelet[2540]: I0209 18:54:48.168892 2540 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:54:48.195194 kubelet[2540]: I0209 18:54:48.195155 2540 server.go:162] "Starting to listen" 
address="0.0.0.0" port=10250 Feb 9 18:54:48.198905 kubelet[2540]: I0209 18:54:48.198874 2540 server.go:461] "Adding debug handlers to kubelet server" Feb 9 18:54:48.211007 kubelet[2540]: I0209 18:54:48.210979 2540 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 18:54:48.229996 kubelet[2540]: I0209 18:54:48.229958 2540 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 18:54:48.236602 kubelet[2540]: E0209 18:54:48.236570 2540 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:54:48.236602 kubelet[2540]: E0209 18:54:48.236610 2540 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:54:48.236827 kubelet[2540]: I0209 18:54:48.236712 2540 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 18:54:48.279449 kubelet[2540]: I0209 18:54:48.279338 2540 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:54:48.281015 kubelet[2540]: I0209 18:54:48.280990 2540 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:54:48.281139 kubelet[2540]: I0209 18:54:48.281023 2540 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 18:54:48.281139 kubelet[2540]: I0209 18:54:48.281044 2540 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 18:54:48.281139 kubelet[2540]: E0209 18:54:48.281099 2540 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:54:48.336087 kubelet[2540]: I0209 18:54:48.336004 2540 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-24-123" Feb 9 18:54:48.348659 kubelet[2540]: I0209 18:54:48.348626 2540 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-24-123" Feb 9 18:54:48.348815 kubelet[2540]: I0209 18:54:48.348723 2540 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-24-123" Feb 9 18:54:48.382329 kubelet[2540]: E0209 18:54:48.382251 2540 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 18:54:48.399849 kubelet[2540]: I0209 18:54:48.399812 2540 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:54:48.399849 kubelet[2540]: I0209 18:54:48.399848 2540 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:54:48.400061 kubelet[2540]: I0209 18:54:48.399868 2540 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:54:48.400111 kubelet[2540]: I0209 18:54:48.400067 2540 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:54:48.400111 kubelet[2540]: I0209 18:54:48.400085 2540 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:54:48.400111 kubelet[2540]: I0209 18:54:48.400095 2540 policy_none.go:49] "None policy: Start" Feb 9 18:54:48.403625 kubelet[2540]: I0209 18:54:48.403597 2540 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:54:48.404117 kubelet[2540]: I0209 18:54:48.403638 2540 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:54:48.404889 kubelet[2540]: I0209 18:54:48.404869 2540 state_mem.go:75] "Updated machine memory state" Feb 9 18:54:48.412886 kubelet[2540]: I0209 18:54:48.412853 2540 
manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:54:48.414739 kubelet[2540]: I0209 18:54:48.414429 2540 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:54:48.582931 kubelet[2540]: I0209 18:54:48.582898 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:54:48.583207 kubelet[2540]: I0209 18:54:48.583190 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:54:48.583633 kubelet[2540]: I0209 18:54:48.583614 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:54:48.597477 kubelet[2540]: E0209 18:54:48.597385 2540 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-123\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-123" Feb 9 18:54:48.597644 kubelet[2540]: E0209 18:54:48.597530 2540 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-24-123\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-123" Feb 9 18:54:48.646769 kubelet[2540]: I0209 18:54:48.646734 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/852240fdea7dff95dc79143405b96279-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-123\" (UID: \"852240fdea7dff95dc79143405b96279\") " pod="kube-system/kube-apiserver-ip-172-31-24-123" Feb 9 18:54:48.647102 kubelet[2540]: I0209 18:54:48.647070 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:48.647195 kubelet[2540]: I0209 18:54:48.647136 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:48.647195 kubelet[2540]: I0209 18:54:48.647184 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3bacd205a2431a0a31647fe92c712a7-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-123\" (UID: \"b3bacd205a2431a0a31647fe92c712a7\") " pod="kube-system/kube-scheduler-ip-172-31-24-123" Feb 9 18:54:48.647296 kubelet[2540]: I0209 18:54:48.647220 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/852240fdea7dff95dc79143405b96279-ca-certs\") pod \"kube-apiserver-ip-172-31-24-123\" (UID: \"852240fdea7dff95dc79143405b96279\") " pod="kube-system/kube-apiserver-ip-172-31-24-123" Feb 9 18:54:48.647296 kubelet[2540]: I0209 18:54:48.647255 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/852240fdea7dff95dc79143405b96279-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-123\" (UID: \"852240fdea7dff95dc79143405b96279\") " pod="kube-system/kube-apiserver-ip-172-31-24-123" Feb 9 18:54:48.647386 kubelet[2540]: I0209 18:54:48.647308 2540 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:48.647386 kubelet[2540]: I0209 18:54:48.647342 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:48.647386 kubelet[2540]: I0209 18:54:48.647378 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/827d239473ebbfd39d144bee5531520f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-123\" (UID: \"827d239473ebbfd39d144bee5531520f\") " pod="kube-system/kube-controller-manager-ip-172-31-24-123" Feb 9 18:54:48.948160 sudo[2550]: pam_unix(sudo:session): session closed for user root Feb 9 18:54:49.136607 kubelet[2540]: I0209 18:54:49.136562 2540 apiserver.go:52] "Watching apiserver" Feb 9 18:54:49.237854 kubelet[2540]: I0209 18:54:49.237809 2540 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 18:54:49.249547 kubelet[2540]: I0209 18:54:49.249516 2540 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:54:49.283068 kubelet[2540]: I0209 18:54:49.283029 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-123" podStartSLOduration=4.28259229 podCreationTimestamp="2024-02-09 18:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:54:49.270421445 +0000 UTC m=+1.304150787" watchObservedRunningTime="2024-02-09 18:54:49.28259229 +0000 UTC m=+1.316321626" Feb 9 18:54:49.297726 kubelet[2540]: I0209 18:54:49.297688 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-123" podStartSLOduration=2.297638727 podCreationTimestamp="2024-02-09 18:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:54:49.283146496 +0000 UTC m=+1.316875822" watchObservedRunningTime="2024-02-09 18:54:49.297638727 +0000 UTC m=+1.331368059" Feb 9 18:54:49.297901 kubelet[2540]: I0209 18:54:49.297811 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-123" podStartSLOduration=1.29778499 podCreationTimestamp="2024-02-09 18:54:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:54:49.295026694 +0000 UTC m=+1.328756035" watchObservedRunningTime="2024-02-09 18:54:49.29778499 +0000 UTC m=+1.331514331" Feb 9 18:54:50.904136 sudo[1792]: pam_unix(sudo:session): session closed for user root Feb 9 18:54:50.927251 sshd[1789]: pam_unix(sshd:session): session closed for user core Feb 9 18:54:50.930798 systemd[1]: sshd@4-172.31.24.123:22-139.178.68.195:54706.service: Deactivated 
successfully. Feb 9 18:54:50.931806 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:54:50.932020 systemd[1]: session-5.scope: Consumed 4.480s CPU time. Feb 9 18:54:50.932677 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:54:50.933961 systemd-logind[1550]: Removed session 5. Feb 9 18:54:51.072067 update_engine[1551]: I0209 18:54:51.072005 1551 update_attempter.cc:509] Updating boot flags... Feb 9 18:55:00.894152 kubelet[2540]: I0209 18:55:00.894121 2540 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:55:00.894634 env[1558]: time="2024-02-09T18:55:00.894599227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 18:55:00.895003 kubelet[2540]: I0209 18:55:00.894879 2540 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:55:01.386032 amazon-ssm-agent[1600]: 2024-02-09 18:55:01 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 18:55:01.496704 kubelet[2540]: I0209 18:55:01.496567 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:55:01.530219 systemd[1]: Created slice kubepods-besteffort-pod204e07aa_cae9_470a_abeb_3db38bd18605.slice. Feb 9 18:55:01.536379 kubelet[2540]: I0209 18:55:01.536253 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/204e07aa-cae9-470a-abeb-3db38bd18605-xtables-lock\") pod \"kube-proxy-t767p\" (UID: \"204e07aa-cae9-470a-abeb-3db38bd18605\") " pod="kube-system/kube-proxy-t767p" Feb 9 18:55:01.536379 kubelet[2540]: I0209 18:55:01.536365 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl68z\" (UniqueName: \"kubernetes.io/projected/204e07aa-cae9-470a-abeb-3db38bd18605-kube-api-access-tl68z\") pod \"kube-proxy-t767p\" (UID: \"204e07aa-cae9-470a-abeb-3db38bd18605\") " pod="kube-system/kube-proxy-t767p" Feb 9 18:55:01.537074 kubelet[2540]: I0209 18:55:01.536460 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/204e07aa-cae9-470a-abeb-3db38bd18605-kube-proxy\") pod \"kube-proxy-t767p\" (UID: \"204e07aa-cae9-470a-abeb-3db38bd18605\") " pod="kube-system/kube-proxy-t767p" Feb 9 18:55:01.537074 kubelet[2540]: I0209 18:55:01.536541 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/204e07aa-cae9-470a-abeb-3db38bd18605-lib-modules\") pod \"kube-proxy-t767p\" (UID: \"204e07aa-cae9-470a-abeb-3db38bd18605\") " pod="kube-system/kube-proxy-t767p" Feb 9 18:55:01.547941 kubelet[2540]: I0209 18:55:01.547783 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:55:01.558049 systemd[1]: Created slice kubepods-burstable-podcff362a0_3168_4df3_a0c2_53439f456212.slice. 
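The runtime config update above hands the node's pod CIDR (192.168.0.0/24) to containerd; the CNI plugin being admitted next (Cilium) can allocate pod IPs from that range. A quick sketch of parsing the CIDR and counting the addresses it provides:

package main

import (
    "fmt"
    "net"
)

func main() {
    // Pod CIDR pushed to the runtime in the entry above.
    _, cidr, err := net.ParseCIDR("192.168.0.0/24")
    if err != nil {
        panic(err)
    }
    ones, bits := cidr.Mask.Size()
    fmt.Printf("%s: %d addresses for pods on this node\n", cidr, 1<<(bits-ones))
}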
Feb 9 18:55:01.565665 kubelet[2540]: W0209 18:55:01.565633 2540 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:55:01.565913 kubelet[2540]: E0209 18:55:01.565898 2540 reflector.go:148] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:55:01.566003 kubelet[2540]: W0209 18:55:01.565849 2540 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:55:01.566262 kubelet[2540]: E0209 18:55:01.566249 2540 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:55:01.638358 kubelet[2540]: I0209 18:55:01.637561 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cilium-cgroup\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.638718 kubelet[2540]: I0209 18:55:01.638703 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cilium-run\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.638963 kubelet[2540]: I0209 18:55:01.638949 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cni-path\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.639076 kubelet[2540]: I0209 18:55:01.639068 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdkd\" (UniqueName: \"kubernetes.io/projected/cff362a0-3168-4df3-a0c2-53439f456212-kube-api-access-wxdkd\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.639171 kubelet[2540]: I0209 18:55:01.639164 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-lib-modules\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.639279 kubelet[2540]: I0209 18:55:01.639272 2540 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-etc-cni-netd\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.640019 kubelet[2540]: I0209 18:55:01.640001 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-xtables-lock\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.640127 kubelet[2540]: I0209 18:55:01.640119 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cff362a0-3168-4df3-a0c2-53439f456212-hubble-tls\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.640221 kubelet[2540]: I0209 18:55:01.640214 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-bpf-maps\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.640302 kubelet[2540]: I0209 18:55:01.640295 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-hostproc\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.640390 kubelet[2540]: I0209 18:55:01.640383 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-host-proc-sys-kernel\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.640891 kubelet[2540]: I0209 18:55:01.640877 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cff362a0-3168-4df3-a0c2-53439f456212-clustermesh-secrets\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.641131 kubelet[2540]: I0209 18:55:01.641118 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff362a0-3168-4df3-a0c2-53439f456212-cilium-config-path\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.641261 kubelet[2540]: I0209 18:55:01.641250 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-host-proc-sys-net\") pod \"cilium-zjwmh\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " pod="kube-system/cilium-zjwmh" Feb 9 18:55:01.653806 kubelet[2540]: E0209 18:55:01.653668 2540 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 9 18:55:01.653989 kubelet[2540]: E0209 18:55:01.653978 2540 projected.go:198] Error preparing data for 
projected volume kube-api-access-tl68z for pod kube-system/kube-proxy-t767p: configmap "kube-root-ca.crt" not found Feb 9 18:55:01.654205 kubelet[2540]: E0209 18:55:01.654190 2540 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/204e07aa-cae9-470a-abeb-3db38bd18605-kube-api-access-tl68z podName:204e07aa-cae9-470a-abeb-3db38bd18605 nodeName:}" failed. No retries permitted until 2024-02-09 18:55:02.15413176 +0000 UTC m=+14.187861081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tl68z" (UniqueName: "kubernetes.io/projected/204e07aa-cae9-470a-abeb-3db38bd18605-kube-api-access-tl68z") pod "kube-proxy-t767p" (UID: "204e07aa-cae9-470a-abeb-3db38bd18605") : configmap "kube-root-ca.crt" not found Feb 9 18:55:01.880057 kubelet[2540]: I0209 18:55:01.880017 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:55:01.887940 systemd[1]: Created slice kubepods-besteffort-pod0e736cb8_9fc9_49c0_a3fb_387f22d06ade.slice. Feb 9 18:55:01.943283 kubelet[2540]: I0209 18:55:01.943184 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e736cb8-9fc9-49c0-a3fb-387f22d06ade-cilium-config-path\") pod \"cilium-operator-574c4bb98d-z7bng\" (UID: \"0e736cb8-9fc9-49c0-a3fb-387f22d06ade\") " pod="kube-system/cilium-operator-574c4bb98d-z7bng" Feb 9 18:55:01.943985 kubelet[2540]: I0209 18:55:01.943968 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzwlh\" (UniqueName: \"kubernetes.io/projected/0e736cb8-9fc9-49c0-a3fb-387f22d06ade-kube-api-access-wzwlh\") pod \"cilium-operator-574c4bb98d-z7bng\" (UID: \"0e736cb8-9fc9-49c0-a3fb-387f22d06ade\") " pod="kube-system/cilium-operator-574c4bb98d-z7bng" Feb 9 18:55:02.200320 env[1558]: time="2024-02-09T18:55:02.200112492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-z7bng,Uid:0e736cb8-9fc9-49c0-a3fb-387f22d06ade,Namespace:kube-system,Attempt:0,}" Feb 9 18:55:02.282978 env[1558]: time="2024-02-09T18:55:02.282826370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:55:02.283380 env[1558]: time="2024-02-09T18:55:02.283042430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:55:02.283380 env[1558]: time="2024-02-09T18:55:02.283233856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:55:02.285517 env[1558]: time="2024-02-09T18:55:02.285295863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1 pid=2889 runtime=io.containerd.runc.v2 Feb 9 18:55:02.321623 systemd[1]: Started cri-containerd-2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1.scope. 
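The reflector warnings above are the node authorizer at work: a kubelet may only read a secret once a pod that mounts it has been bound to that node, so during pod admission there is a short window where list/watch is forbidden ("no relationship found between node ... and this object"). Likewise the kube-api-access mount fails only because kube-root-ca.crt has not been published to the namespace yet, and the operation is requeued with a 500ms backoff rather than treated as fatal. A minimal sketch of probing the same authorization decision with client-go follows; the kubeconfig path is an assumption, and in practice you would run this with the node's own credentials:

```go
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Ask the apiserver the same question the reflector failed: may the
	// current identity read secrets/cilium-clustermesh in kube-system?
	// The kubeconfig path below is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "kube-system",
				Verb:      "get",
				Resource:  "secrets",
				Name:      "cilium-clustermesh",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Once the cilium-zjwmh pod is bound to this node, Allowed flips to true.
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```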
Feb 9 18:55:02.411403 env[1558]: time="2024-02-09T18:55:02.411364889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-z7bng,Uid:0e736cb8-9fc9-49c0-a3fb-387f22d06ade,Namespace:kube-system,Attempt:0,} returns sandbox id \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\"" Feb 9 18:55:02.420677 env[1558]: time="2024-02-09T18:55:02.420643073Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 18:55:02.452608 env[1558]: time="2024-02-09T18:55:02.452465457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t767p,Uid:204e07aa-cae9-470a-abeb-3db38bd18605,Namespace:kube-system,Attempt:0,}" Feb 9 18:55:02.508958 env[1558]: time="2024-02-09T18:55:02.508871264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:55:02.508958 env[1558]: time="2024-02-09T18:55:02.508914866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:55:02.509265 env[1558]: time="2024-02-09T18:55:02.508931137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:55:02.509265 env[1558]: time="2024-02-09T18:55:02.509102573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46067d6828657511091ece0ecb45f8f7602be2bdce9fcd1013c75f0ef8eccfe1 pid=2931 runtime=io.containerd.runc.v2 Feb 9 18:55:02.529434 systemd[1]: Started cri-containerd-46067d6828657511091ece0ecb45f8f7602be2bdce9fcd1013c75f0ef8eccfe1.scope. Feb 9 18:55:02.600320 env[1558]: time="2024-02-09T18:55:02.600269592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t767p,Uid:204e07aa-cae9-470a-abeb-3db38bd18605,Namespace:kube-system,Attempt:0,} returns sandbox id \"46067d6828657511091ece0ecb45f8f7602be2bdce9fcd1013c75f0ef8eccfe1\"" Feb 9 18:55:02.607888 env[1558]: time="2024-02-09T18:55:02.607840263Z" level=info msg="CreateContainer within sandbox \"46067d6828657511091ece0ecb45f8f7602be2bdce9fcd1013c75f0ef8eccfe1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:55:02.640110 env[1558]: time="2024-02-09T18:55:02.638551740Z" level=info msg="CreateContainer within sandbox \"46067d6828657511091ece0ecb45f8f7602be2bdce9fcd1013c75f0ef8eccfe1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a1cbfd46142367feeee3a381df0c181980f96c05ed79d744c3101cfc96e42fdc\"" Feb 9 18:55:02.646613 env[1558]: time="2024-02-09T18:55:02.646569065Z" level=info msg="StartContainer for \"a1cbfd46142367feeee3a381df0c181980f96c05ed79d744c3101cfc96e42fdc\"" Feb 9 18:55:02.699554 systemd[1]: Started cri-containerd-a1cbfd46142367feeee3a381df0c181980f96c05ed79d744c3101cfc96e42fdc.scope. Feb 9 18:55:02.781352 env[1558]: time="2024-02-09T18:55:02.781176440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zjwmh,Uid:cff362a0-3168-4df3-a0c2-53439f456212,Namespace:kube-system,Attempt:0,}" Feb 9 18:55:02.834000 env[1558]: time="2024-02-09T18:55:02.833956637Z" level=info msg="StartContainer for \"a1cbfd46142367feeee3a381df0c181980f96c05ed79d744c3101cfc96e42fdc\" returns successfully" Feb 9 18:55:02.844402 env[1558]: time="2024-02-09T18:55:02.843878922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:55:02.844402 env[1558]: time="2024-02-09T18:55:02.843934608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:55:02.844402 env[1558]: time="2024-02-09T18:55:02.843953378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:55:02.844402 env[1558]: time="2024-02-09T18:55:02.844201553Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015 pid=3005 runtime=io.containerd.runc.v2 Feb 9 18:55:02.881678 systemd[1]: Started cri-containerd-0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015.scope. Feb 9 18:55:02.944585 env[1558]: time="2024-02-09T18:55:02.944532945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zjwmh,Uid:cff362a0-3168-4df3-a0c2-53439f456212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\"" Feb 9 18:55:03.767716 systemd[1]: run-containerd-runc-k8s.io-0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015-runc.O11lrH.mount: Deactivated successfully. Feb 9 18:55:04.125368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080119813.mount: Deactivated successfully. Feb 9 18:55:06.022030 env[1558]: time="2024-02-09T18:55:06.021976837Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:55:06.026466 env[1558]: time="2024-02-09T18:55:06.026420206Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:55:06.029179 env[1558]: time="2024-02-09T18:55:06.029136865Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:55:06.029748 env[1558]: time="2024-02-09T18:55:06.029712819Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 18:55:06.031321 env[1558]: time="2024-02-09T18:55:06.031285366Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 18:55:06.036786 env[1558]: time="2024-02-09T18:55:06.036741929Z" level=info msg="CreateContainer within sandbox \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 18:55:06.077456 env[1558]: time="2024-02-09T18:55:06.077254285Z" level=info msg="CreateContainer within sandbox \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\"" Feb 9 18:55:06.078586 env[1558]: 
time="2024-02-09T18:55:06.078537458Z" level=info msg="StartContainer for \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\"" Feb 9 18:55:06.121560 systemd[1]: Started cri-containerd-59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318.scope. Feb 9 18:55:06.162933 env[1558]: time="2024-02-09T18:55:06.162884266Z" level=info msg="StartContainer for \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\" returns successfully" Feb 9 18:55:06.410692 kubelet[2540]: I0209 18:55:06.410573 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-t767p" podStartSLOduration=5.410502784 podCreationTimestamp="2024-02-09 18:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:55:03.364269698 +0000 UTC m=+15.397999040" watchObservedRunningTime="2024-02-09 18:55:06.410502784 +0000 UTC m=+18.444232126" Feb 9 18:55:08.311513 kubelet[2540]: I0209 18:55:08.311335 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-z7bng" podStartSLOduration=3.69427562 podCreationTimestamp="2024-02-09 18:55:01 +0000 UTC" firstStartedPulling="2024-02-09 18:55:02.413253806 +0000 UTC m=+14.446983137" lastFinishedPulling="2024-02-09 18:55:06.030267545 +0000 UTC m=+18.063996881" observedRunningTime="2024-02-09 18:55:06.47431915 +0000 UTC m=+18.508048497" watchObservedRunningTime="2024-02-09 18:55:08.311289364 +0000 UTC m=+20.345018761" Feb 9 18:55:13.278408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278317512.mount: Deactivated successfully. Feb 9 18:55:17.147377 env[1558]: time="2024-02-09T18:55:17.147226378Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:55:17.154725 env[1558]: time="2024-02-09T18:55:17.154677048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:55:17.158324 env[1558]: time="2024-02-09T18:55:17.158259068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:55:17.159099 env[1558]: time="2024-02-09T18:55:17.159061493Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 18:55:17.163212 env[1558]: time="2024-02-09T18:55:17.163078741Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:55:17.181491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795002083.mount: Deactivated successfully. Feb 9 18:55:17.184717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692371201.mount: Deactivated successfully. 
Feb 9 18:55:17.195839 env[1558]: time="2024-02-09T18:55:17.195785843Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\"" Feb 9 18:55:17.197920 env[1558]: time="2024-02-09T18:55:17.196444583Z" level=info msg="StartContainer for \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\"" Feb 9 18:55:17.224818 systemd[1]: Started cri-containerd-3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed.scope. Feb 9 18:55:17.282148 env[1558]: time="2024-02-09T18:55:17.282077020Z" level=info msg="StartContainer for \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\" returns successfully" Feb 9 18:55:17.291134 systemd[1]: cri-containerd-3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed.scope: Deactivated successfully. Feb 9 18:55:17.470609 env[1558]: time="2024-02-09T18:55:17.470454713Z" level=info msg="shim disconnected" id=3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed Feb 9 18:55:17.470609 env[1558]: time="2024-02-09T18:55:17.470536037Z" level=warning msg="cleaning up after shim disconnected" id=3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed namespace=k8s.io Feb 9 18:55:17.470609 env[1558]: time="2024-02-09T18:55:17.470549622Z" level=info msg="cleaning up dead shim" Feb 9 18:55:17.480296 env[1558]: time="2024-02-09T18:55:17.480249110Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:55:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3255 runtime=io.containerd.runc.v2\n" Feb 9 18:55:18.175336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed-rootfs.mount: Deactivated successfully. Feb 9 18:55:18.401298 env[1558]: time="2024-02-09T18:55:18.401208768Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:55:18.439604 env[1558]: time="2024-02-09T18:55:18.437927223Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\"" Feb 9 18:55:18.442230 env[1558]: time="2024-02-09T18:55:18.442184232Z" level=info msg="StartContainer for \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\"" Feb 9 18:55:18.482681 systemd[1]: Started cri-containerd-91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574.scope. Feb 9 18:55:18.542839 env[1558]: time="2024-02-09T18:55:18.542430107Z" level=info msg="StartContainer for \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\" returns successfully" Feb 9 18:55:18.559415 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:55:18.560249 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:55:18.560790 systemd[1]: Stopping systemd-sysctl.service... Feb 9 18:55:18.564988 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:55:18.570569 systemd[1]: cri-containerd-91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574.scope: Deactivated successfully. 
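mount-cgroup above is the first of Cilium's init containers; apply-sysctl-overwrites, which runs next, just writes kernel settings under /proc/sys before the agent starts (which is why systemd-sysctl is restarted immediately afterwards to re-apply host policy). A minimal sketch of that step; the specific keys and values here are illustrative assumptions, not Cilium's exact set:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// setSysctl translates a dotted sysctl key into its /proc/sys path and
// writes the value, which is essentially all an
// "apply-sysctl-overwrites"-style init step does.
func setSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Illustrative overrides only; the real init container derives its set
	// from the agent configuration.
	overrides := map[string]string{
		"net.ipv4.ip_forward":         "1",
		"net.ipv4.conf.all.rp_filter": "0",
	}
	for k, v := range overrides {
		if err := setSysctl(k, v); err != nil {
			log.Fatalf("sysctl %s: %v", k, err)
		}
		fmt.Printf("set %s=%s\n", k, v)
	}
}
```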
Feb 9 18:55:18.617929 env[1558]: time="2024-02-09T18:55:18.617781826Z" level=info msg="shim disconnected" id=91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574 Feb 9 18:55:18.618384 env[1558]: time="2024-02-09T18:55:18.618347496Z" level=warning msg="cleaning up after shim disconnected" id=91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574 namespace=k8s.io Feb 9 18:55:18.618384 env[1558]: time="2024-02-09T18:55:18.618376871Z" level=info msg="cleaning up dead shim" Feb 9 18:55:18.622041 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:55:18.632286 env[1558]: time="2024-02-09T18:55:18.632235121Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3322 runtime=io.containerd.runc.v2\n" Feb 9 18:55:19.175527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574-rootfs.mount: Deactivated successfully. Feb 9 18:55:19.406291 env[1558]: time="2024-02-09T18:55:19.406244912Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:55:19.444928 env[1558]: time="2024-02-09T18:55:19.444409112Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\"" Feb 9 18:55:19.448187 env[1558]: time="2024-02-09T18:55:19.446680981Z" level=info msg="StartContainer for \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\"" Feb 9 18:55:19.480510 systemd[1]: Started cri-containerd-ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413.scope. Feb 9 18:55:19.534217 env[1558]: time="2024-02-09T18:55:19.534166569Z" level=info msg="StartContainer for \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\" returns successfully" Feb 9 18:55:19.539461 systemd[1]: cri-containerd-ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413.scope: Deactivated successfully. Feb 9 18:55:19.572252 env[1558]: time="2024-02-09T18:55:19.572197507Z" level=info msg="shim disconnected" id=ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413 Feb 9 18:55:19.572252 env[1558]: time="2024-02-09T18:55:19.572249462Z" level=warning msg="cleaning up after shim disconnected" id=ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413 namespace=k8s.io Feb 9 18:55:19.572802 env[1558]: time="2024-02-09T18:55:19.572262046Z" level=info msg="cleaning up dead shim" Feb 9 18:55:19.584029 env[1558]: time="2024-02-09T18:55:19.583985020Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:55:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3382 runtime=io.containerd.runc.v2\n" Feb 9 18:55:20.175149 systemd[1]: run-containerd-runc-k8s.io-ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413-runc.zet0Lm.mount: Deactivated successfully. Feb 9 18:55:20.175271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413-rootfs.mount: Deactivated successfully. 
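mount-bpf-fs, the next init step above, only has to ensure a BPF filesystem is mounted at /sys/fs/bpf so pinned maps outlive agent restarts. Roughly, as a sketch that assumes EBUSY is the only already-mounted case worth distinguishing:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Mount a bpffs instance at the conventional path; if something is
	// already mounted there the kernel reports EBUSY and we treat that as
	// success. Real init containers also verify the existing fs type.
	const target = "/sys/fs/bpf"
	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		if err == unix.EBUSY {
			log.Println("bpffs already mounted at", target)
			return
		}
		log.Fatal(err)
	}
	log.Println("mounted bpffs at", target)
}
```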
Feb 9 18:55:20.407729 env[1558]: time="2024-02-09T18:55:20.407645576Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:55:20.443737 env[1558]: time="2024-02-09T18:55:20.443071800Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\"" Feb 9 18:55:20.453125 env[1558]: time="2024-02-09T18:55:20.452492980Z" level=info msg="StartContainer for \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\"" Feb 9 18:55:20.483850 systemd[1]: Started cri-containerd-5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176.scope. Feb 9 18:55:20.519286 systemd[1]: cri-containerd-5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176.scope: Deactivated successfully. Feb 9 18:55:20.521300 env[1558]: time="2024-02-09T18:55:20.521097248Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcff362a0_3168_4df3_a0c2_53439f456212.slice/cri-containerd-5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176.scope/memory.events\": no such file or directory" Feb 9 18:55:20.524098 env[1558]: time="2024-02-09T18:55:20.524026546Z" level=info msg="StartContainer for \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\" returns successfully" Feb 9 18:55:20.556252 env[1558]: time="2024-02-09T18:55:20.556197428Z" level=info msg="shim disconnected" id=5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176 Feb 9 18:55:20.556252 env[1558]: time="2024-02-09T18:55:20.556250158Z" level=warning msg="cleaning up after shim disconnected" id=5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176 namespace=k8s.io Feb 9 18:55:20.556574 env[1558]: time="2024-02-09T18:55:20.556261525Z" level=info msg="cleaning up dead shim" Feb 9 18:55:20.566623 env[1558]: time="2024-02-09T18:55:20.566579534Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:55:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3438 runtime=io.containerd.runc.v2\n" Feb 9 18:55:21.175885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176-rootfs.mount: Deactivated successfully. Feb 9 18:55:21.418852 env[1558]: time="2024-02-09T18:55:21.416798743Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:55:21.452035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935932992.mount: Deactivated successfully. 
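The cgroupsv2 EventChan warning above is a benign race: clean-cilium-state exits so quickly that its scope's memory.events file is gone before the runtime can add an inotify watch. A watcher that treats ENOENT as "container already exited", sketched here with fsnotify; the path is copied from the log purely for illustration:

```go
package main

import (
	"errors"
	"io/fs"
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Watch a scope's memory.events for OOM/limit events, but tolerate the
	// cgroup vanishing first, which is exactly what the log warns about.
	const path = "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-podcff362a0_3168_4df3_a0c2_53439f456212.slice/" +
		"cri-containerd-5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176.scope/memory.events"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add(path); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			log.Println("cgroup already removed; container exited before watch was set")
			return
		}
		log.Fatal(err)
	}
	for ev := range w.Events {
		log.Println("memory event:", ev)
	}
}
```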
Feb 9 18:55:21.467119 env[1558]: time="2024-02-09T18:55:21.467074710Z" level=info msg="CreateContainer within sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\"" Feb 9 18:55:21.467991 env[1558]: time="2024-02-09T18:55:21.467962921Z" level=info msg="StartContainer for \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\"" Feb 9 18:55:21.497430 systemd[1]: Started cri-containerd-8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16.scope. Feb 9 18:55:21.538863 env[1558]: time="2024-02-09T18:55:21.538784273Z" level=info msg="StartContainer for \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\" returns successfully" Feb 9 18:55:21.778773 kubelet[2540]: I0209 18:55:21.778742 2540 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:55:21.805651 kubelet[2540]: I0209 18:55:21.805559 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:55:21.808147 kubelet[2540]: I0209 18:55:21.808088 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:55:21.817342 systemd[1]: Created slice kubepods-burstable-podaf5236c7_8b01_45b8_a032_6623c700923e.slice. Feb 9 18:55:21.821814 systemd[1]: Created slice kubepods-burstable-podef4e6f54_dbfd_414a_964d_471f97a73d08.slice. Feb 9 18:55:21.850500 kubelet[2540]: I0209 18:55:21.850459 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef4e6f54-dbfd-414a-964d-471f97a73d08-config-volume\") pod \"coredns-5d78c9869d-zc824\" (UID: \"ef4e6f54-dbfd-414a-964d-471f97a73d08\") " pod="kube-system/coredns-5d78c9869d-zc824" Feb 9 18:55:21.850721 kubelet[2540]: I0209 18:55:21.850693 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzmd\" (UniqueName: \"kubernetes.io/projected/af5236c7-8b01-45b8-a032-6623c700923e-kube-api-access-whzmd\") pod \"coredns-5d78c9869d-bdhfb\" (UID: \"af5236c7-8b01-45b8-a032-6623c700923e\") " pod="kube-system/coredns-5d78c9869d-bdhfb" Feb 9 18:55:21.850856 kubelet[2540]: I0209 18:55:21.850744 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5236c7-8b01-45b8-a032-6623c700923e-config-volume\") pod \"coredns-5d78c9869d-bdhfb\" (UID: \"af5236c7-8b01-45b8-a032-6623c700923e\") " pod="kube-system/coredns-5d78c9869d-bdhfb" Feb 9 18:55:21.850856 kubelet[2540]: I0209 18:55:21.850778 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqqdf\" (UniqueName: \"kubernetes.io/projected/ef4e6f54-dbfd-414a-964d-471f97a73d08-kube-api-access-sqqdf\") pod \"coredns-5d78c9869d-zc824\" (UID: \"ef4e6f54-dbfd-414a-964d-471f97a73d08\") " pod="kube-system/coredns-5d78c9869d-zc824" Feb 9 18:55:22.120806 env[1558]: time="2024-02-09T18:55:22.120686440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-bdhfb,Uid:af5236c7-8b01-45b8-a032-6623c700923e,Namespace:kube-system,Attempt:0,}" Feb 9 18:55:22.130782 env[1558]: time="2024-02-09T18:55:22.130733999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-zc824,Uid:ef4e6f54-dbfd-414a-964d-471f97a73d08,Namespace:kube-system,Attempt:0,}" Feb 9 18:55:24.090017 
systemd-networkd[1374]: cilium_host: Link UP Feb 9 18:55:24.093093 (udev-worker)[3564]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:55:24.093377 (udev-worker)[3566]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:55:24.093786 systemd-networkd[1374]: cilium_net: Link UP Feb 9 18:55:24.093791 systemd-networkd[1374]: cilium_net: Gained carrier Feb 9 18:55:24.095740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 18:55:24.095403 systemd-networkd[1374]: cilium_host: Gained carrier Feb 9 18:55:24.294822 systemd-networkd[1374]: cilium_vxlan: Link UP Feb 9 18:55:24.294835 systemd-networkd[1374]: cilium_vxlan: Gained carrier Feb 9 18:55:24.876521 kernel: NET: Registered PF_ALG protocol family Feb 9 18:55:24.890052 systemd-networkd[1374]: cilium_net: Gained IPv6LL Feb 9 18:55:24.890556 systemd-networkd[1374]: cilium_host: Gained IPv6LL Feb 9 18:55:25.707382 (udev-worker)[3611]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:55:25.740823 systemd-networkd[1374]: lxc_health: Link UP Feb 9 18:55:25.761508 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:55:25.761759 systemd-networkd[1374]: lxc_health: Gained carrier Feb 9 18:55:26.103665 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL Feb 9 18:55:26.256899 systemd-networkd[1374]: lxcfb8a1f0ddfc3: Link UP Feb 9 18:55:26.272252 systemd-networkd[1374]: lxc163b54c1b7a9: Link UP Feb 9 18:55:26.279593 kernel: eth0: renamed from tmpb4048 Feb 9 18:55:26.289528 (udev-worker)[3612]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:55:26.292514 kernel: eth0: renamed from tmp510ea Feb 9 18:55:26.299652 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfb8a1f0ddfc3: link becomes ready Feb 9 18:55:26.298853 systemd-networkd[1374]: lxcfb8a1f0ddfc3: Gained carrier Feb 9 18:55:26.304676 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc163b54c1b7a9: link becomes ready Feb 9 18:55:26.304981 systemd-networkd[1374]: lxc163b54c1b7a9: Gained carrier Feb 9 18:55:26.822298 kubelet[2540]: I0209 18:55:26.822256 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zjwmh" podStartSLOduration=11.603338658 podCreationTimestamp="2024-02-09 18:55:01 +0000 UTC" firstStartedPulling="2024-02-09 18:55:02.946441315 +0000 UTC m=+14.980170642" lastFinishedPulling="2024-02-09 18:55:17.159504435 +0000 UTC m=+29.193233761" observedRunningTime="2024-02-09 18:55:22.446664153 +0000 UTC m=+34.480393488" watchObservedRunningTime="2024-02-09 18:55:26.816401777 +0000 UTC m=+38.850131119" Feb 9 18:55:27.001080 systemd-networkd[1374]: lxc_health: Gained IPv6LL Feb 9 18:55:28.087738 systemd-networkd[1374]: lxcfb8a1f0ddfc3: Gained IPv6LL Feb 9 18:55:28.279752 systemd-networkd[1374]: lxc163b54c1b7a9: Gained IPv6LL Feb 9 18:55:32.086158 env[1558]: time="2024-02-09T18:55:32.086072325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:55:32.086158 env[1558]: time="2024-02-09T18:55:32.086131918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:55:32.087303 env[1558]: time="2024-02-09T18:55:32.087224071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:55:32.100345 env[1558]: time="2024-02-09T18:55:32.088121272Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/510ea24340a37ddc245a7f0369dddf4cdb8f01ccfc39054f7b4c2feef393eab1 pid=3979 runtime=io.containerd.runc.v2 Feb 9 18:55:32.104963 env[1558]: time="2024-02-09T18:55:32.104838224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:55:32.105142 env[1558]: time="2024-02-09T18:55:32.104975259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:55:32.105142 env[1558]: time="2024-02-09T18:55:32.105006983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:55:32.105284 env[1558]: time="2024-02-09T18:55:32.105246195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4048e7606716c84e1956c4e167c85c6791469cb1dce26574fa6499e74a00aaa pid=3996 runtime=io.containerd.runc.v2 Feb 9 18:55:32.134467 systemd[1]: Started cri-containerd-510ea24340a37ddc245a7f0369dddf4cdb8f01ccfc39054f7b4c2feef393eab1.scope. Feb 9 18:55:32.171804 systemd[1]: Started cri-containerd-b4048e7606716c84e1956c4e167c85c6791469cb1dce26574fa6499e74a00aaa.scope. Feb 9 18:55:32.237646 env[1558]: time="2024-02-09T18:55:32.237593606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-zc824,Uid:ef4e6f54-dbfd-414a-964d-471f97a73d08,Namespace:kube-system,Attempt:0,} returns sandbox id \"510ea24340a37ddc245a7f0369dddf4cdb8f01ccfc39054f7b4c2feef393eab1\"" Feb 9 18:55:32.241138 env[1558]: time="2024-02-09T18:55:32.241086724Z" level=info msg="CreateContainer within sandbox \"510ea24340a37ddc245a7f0369dddf4cdb8f01ccfc39054f7b4c2feef393eab1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:55:32.270934 env[1558]: time="2024-02-09T18:55:32.270883146Z" level=info msg="CreateContainer within sandbox \"510ea24340a37ddc245a7f0369dddf4cdb8f01ccfc39054f7b4c2feef393eab1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d872a2996734a814a6b6830b87217e24f0c7cc9cf0eede6131395852e8e8bd9a\"" Feb 9 18:55:32.272037 env[1558]: time="2024-02-09T18:55:32.271993388Z" level=info msg="StartContainer for \"d872a2996734a814a6b6830b87217e24f0c7cc9cf0eede6131395852e8e8bd9a\"" Feb 9 18:55:32.301525 env[1558]: time="2024-02-09T18:55:32.301411983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-bdhfb,Uid:af5236c7-8b01-45b8-a032-6623c700923e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4048e7606716c84e1956c4e167c85c6791469cb1dce26574fa6499e74a00aaa\"" Feb 9 18:55:32.305358 env[1558]: time="2024-02-09T18:55:32.305316150Z" level=info msg="CreateContainer within sandbox \"b4048e7606716c84e1956c4e167c85c6791469cb1dce26574fa6499e74a00aaa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:55:32.332294 env[1558]: time="2024-02-09T18:55:32.332238064Z" level=info msg="CreateContainer within sandbox \"b4048e7606716c84e1956c4e167c85c6791469cb1dce26574fa6499e74a00aaa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53237c8cc1f56522558fe133a1a63ee7a42cfe7ee9d343c730d68e644b0eee11\"" Feb 9 18:55:32.333982 env[1558]: time="2024-02-09T18:55:32.333919043Z" level=info msg="StartContainer 
for \"53237c8cc1f56522558fe133a1a63ee7a42cfe7ee9d343c730d68e644b0eee11\"" Feb 9 18:55:32.354564 systemd[1]: Started cri-containerd-d872a2996734a814a6b6830b87217e24f0c7cc9cf0eede6131395852e8e8bd9a.scope. Feb 9 18:55:32.397604 systemd[1]: Started cri-containerd-53237c8cc1f56522558fe133a1a63ee7a42cfe7ee9d343c730d68e644b0eee11.scope. Feb 9 18:55:32.511444 env[1558]: time="2024-02-09T18:55:32.511355299Z" level=info msg="StartContainer for \"d872a2996734a814a6b6830b87217e24f0c7cc9cf0eede6131395852e8e8bd9a\" returns successfully" Feb 9 18:55:32.527414 env[1558]: time="2024-02-09T18:55:32.527358189Z" level=info msg="StartContainer for \"53237c8cc1f56522558fe133a1a63ee7a42cfe7ee9d343c730d68e644b0eee11\" returns successfully" Feb 9 18:55:33.097192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908354873.mount: Deactivated successfully. Feb 9 18:55:33.527414 kubelet[2540]: I0209 18:55:33.527378 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-zc824" podStartSLOduration=32.527318346 podCreationTimestamp="2024-02-09 18:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:55:33.525118066 +0000 UTC m=+45.558847407" watchObservedRunningTime="2024-02-09 18:55:33.527318346 +0000 UTC m=+45.561047686" Feb 9 18:55:33.538493 kubelet[2540]: I0209 18:55:33.538450 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-bdhfb" podStartSLOduration=32.538400362 podCreationTimestamp="2024-02-09 18:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:55:33.536750002 +0000 UTC m=+45.570479356" watchObservedRunningTime="2024-02-09 18:55:33.538400362 +0000 UTC m=+45.572129702" Feb 9 18:55:37.589369 systemd[1]: Started sshd@5-172.31.24.123:22-139.178.68.195:40672.service. Feb 9 18:55:37.786541 sshd[4138]: Accepted publickey for core from 139.178.68.195 port 40672 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:55:37.789287 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:55:37.796042 systemd-logind[1550]: New session 6 of user core. Feb 9 18:55:37.797154 systemd[1]: Started session-6.scope. Feb 9 18:55:38.039132 sshd[4138]: pam_unix(sshd:session): session closed for user core Feb 9 18:55:38.042586 systemd[1]: sshd@5-172.31.24.123:22-139.178.68.195:40672.service: Deactivated successfully. Feb 9 18:55:38.043706 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:55:38.044757 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:55:38.045728 systemd-logind[1550]: Removed session 6. Feb 9 18:55:43.068392 systemd[1]: Started sshd@6-172.31.24.123:22-139.178.68.195:40676.service. Feb 9 18:55:43.246562 sshd[4150]: Accepted publickey for core from 139.178.68.195 port 40676 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:55:43.245281 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:55:43.254594 systemd-logind[1550]: New session 7 of user core. Feb 9 18:55:43.255230 systemd[1]: Started session-7.scope. Feb 9 18:55:43.457562 sshd[4150]: pam_unix(sshd:session): session closed for user core Feb 9 18:55:43.461374 systemd[1]: sshd@6-172.31.24.123:22-139.178.68.195:40676.service: Deactivated successfully. 
Feb 9 18:55:43.462638 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:55:43.463971 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:55:43.465578 systemd-logind[1550]: Removed session 7. Feb 9 18:55:48.485687 systemd[1]: Started sshd@7-172.31.24.123:22-139.178.68.195:59344.service. Feb 9 18:55:48.660108 sshd[4165]: Accepted publickey for core from 139.178.68.195 port 59344 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:55:48.660858 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:55:48.667762 systemd-logind[1550]: New session 8 of user core. Feb 9 18:55:48.668447 systemd[1]: Started session-8.scope. Feb 9 18:55:48.873828 sshd[4165]: pam_unix(sshd:session): session closed for user core Feb 9 18:55:48.878916 systemd[1]: sshd@7-172.31.24.123:22-139.178.68.195:59344.service: Deactivated successfully. Feb 9 18:55:48.879795 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 18:55:48.880502 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Feb 9 18:55:48.881461 systemd-logind[1550]: Removed session 8. Feb 9 18:55:53.902667 systemd[1]: Started sshd@8-172.31.24.123:22-139.178.68.195:59354.service. Feb 9 18:55:54.068874 sshd[4179]: Accepted publickey for core from 139.178.68.195 port 59354 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:55:54.070749 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:55:54.076976 systemd[1]: Started session-9.scope. Feb 9 18:55:54.078891 systemd-logind[1550]: New session 9 of user core. Feb 9 18:55:54.308312 sshd[4179]: pam_unix(sshd:session): session closed for user core Feb 9 18:55:54.311922 systemd[1]: sshd@8-172.31.24.123:22-139.178.68.195:59354.service: Deactivated successfully. Feb 9 18:55:54.312891 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 18:55:54.313830 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Feb 9 18:55:54.314890 systemd-logind[1550]: Removed session 9. Feb 9 18:55:59.337017 systemd[1]: Started sshd@9-172.31.24.123:22-139.178.68.195:59904.service. Feb 9 18:55:59.507133 sshd[4194]: Accepted publickey for core from 139.178.68.195 port 59904 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:55:59.509228 sshd[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:55:59.517542 systemd[1]: Started session-10.scope. Feb 9 18:55:59.518068 systemd-logind[1550]: New session 10 of user core. Feb 9 18:55:59.729402 sshd[4194]: pam_unix(sshd:session): session closed for user core Feb 9 18:55:59.733193 systemd[1]: sshd@9-172.31.24.123:22-139.178.68.195:59904.service: Deactivated successfully. Feb 9 18:55:59.734694 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:55:59.735601 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Feb 9 18:55:59.736718 systemd-logind[1550]: Removed session 10. Feb 9 18:55:59.757505 systemd[1]: Started sshd@10-172.31.24.123:22-139.178.68.195:59910.service. Feb 9 18:55:59.922476 sshd[4206]: Accepted publickey for core from 139.178.68.195 port 59910 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:55:59.924324 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:55:59.930730 systemd[1]: Started session-11.scope. Feb 9 18:55:59.931215 systemd-logind[1550]: New session 11 of user core. 
Feb 9 18:56:01.208292 sshd[4206]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:01.226606 systemd[1]: sshd@10-172.31.24.123:22-139.178.68.195:59910.service: Deactivated successfully. Feb 9 18:56:01.228336 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 18:56:01.229561 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:56:01.237221 systemd[1]: Started sshd@11-172.31.24.123:22-139.178.68.195:59920.service. Feb 9 18:56:01.238981 systemd-logind[1550]: Removed session 11. Feb 9 18:56:01.429157 sshd[4216]: Accepted publickey for core from 139.178.68.195 port 59920 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:01.431202 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:01.445041 systemd-logind[1550]: New session 12 of user core. Feb 9 18:56:01.446168 systemd[1]: Started session-12.scope. Feb 9 18:56:01.695621 sshd[4216]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:01.706326 systemd[1]: sshd@11-172.31.24.123:22-139.178.68.195:59920.service: Deactivated successfully. Feb 9 18:56:01.709934 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 18:56:01.714335 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Feb 9 18:56:01.717723 systemd-logind[1550]: Removed session 12. Feb 9 18:56:06.724237 systemd[1]: Started sshd@12-172.31.24.123:22-139.178.68.195:56720.service. Feb 9 18:56:06.895354 sshd[4231]: Accepted publickey for core from 139.178.68.195 port 56720 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:06.896940 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:06.903256 systemd-logind[1550]: New session 13 of user core. Feb 9 18:56:06.903982 systemd[1]: Started session-13.scope. Feb 9 18:56:07.143421 sshd[4231]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:07.148285 systemd[1]: sshd@12-172.31.24.123:22-139.178.68.195:56720.service: Deactivated successfully. Feb 9 18:56:07.150051 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 18:56:07.152377 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Feb 9 18:56:07.153390 systemd-logind[1550]: Removed session 13. Feb 9 18:56:12.178763 systemd[1]: Started sshd@13-172.31.24.123:22-139.178.68.195:56728.service. Feb 9 18:56:12.355336 sshd[4243]: Accepted publickey for core from 139.178.68.195 port 56728 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:12.359879 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:12.375471 systemd[1]: Started session-14.scope. Feb 9 18:56:12.378678 systemd-logind[1550]: New session 14 of user core. Feb 9 18:56:12.601731 sshd[4243]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:12.605590 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit. Feb 9 18:56:12.605893 systemd[1]: sshd@13-172.31.24.123:22-139.178.68.195:56728.service: Deactivated successfully. Feb 9 18:56:12.607054 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 18:56:12.608541 systemd-logind[1550]: Removed session 14. Feb 9 18:56:16.925602 amazon-ssm-agent[1600]: 2024-02-09 18:56:16 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 18:56:17.634869 systemd[1]: Started sshd@14-172.31.24.123:22-139.178.68.195:50768.service. 
Feb 9 18:56:17.812853 sshd[4257]: Accepted publickey for core from 139.178.68.195 port 50768 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:17.814764 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:17.820345 systemd[1]: Started session-15.scope. Feb 9 18:56:17.821023 systemd-logind[1550]: New session 15 of user core. Feb 9 18:56:18.039670 sshd[4257]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:18.044303 systemd[1]: sshd@14-172.31.24.123:22-139.178.68.195:50768.service: Deactivated successfully. Feb 9 18:56:18.045304 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 18:56:18.046119 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Feb 9 18:56:18.047536 systemd-logind[1550]: Removed session 15. Feb 9 18:56:18.067666 systemd[1]: Started sshd@15-172.31.24.123:22-139.178.68.195:50784.service. Feb 9 18:56:18.235516 sshd[4270]: Accepted publickey for core from 139.178.68.195 port 50784 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:18.237136 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:18.249149 systemd-logind[1550]: New session 16 of user core. Feb 9 18:56:18.249949 systemd[1]: Started session-16.scope. Feb 9 18:56:18.944543 sshd[4270]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:18.948744 systemd[1]: sshd@15-172.31.24.123:22-139.178.68.195:50784.service: Deactivated successfully. Feb 9 18:56:18.950410 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 18:56:18.951292 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit. Feb 9 18:56:18.952226 systemd-logind[1550]: Removed session 16. Feb 9 18:56:18.972220 systemd[1]: Started sshd@16-172.31.24.123:22-139.178.68.195:50798.service. Feb 9 18:56:19.157882 sshd[4280]: Accepted publickey for core from 139.178.68.195 port 50798 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:19.159750 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:19.191058 systemd[1]: Started session-17.scope. Feb 9 18:56:19.192941 systemd-logind[1550]: New session 17 of user core. Feb 9 18:56:20.588640 sshd[4280]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:20.619391 systemd[1]: sshd@16-172.31.24.123:22-139.178.68.195:50798.service: Deactivated successfully. Feb 9 18:56:20.624334 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 18:56:20.627800 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit. Feb 9 18:56:20.636875 systemd[1]: Started sshd@17-172.31.24.123:22-139.178.68.195:50810.service. Feb 9 18:56:20.638379 systemd-logind[1550]: Removed session 17. Feb 9 18:56:20.810380 sshd[4297]: Accepted publickey for core from 139.178.68.195 port 50810 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:20.811783 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:20.817582 systemd-logind[1550]: New session 18 of user core. Feb 9 18:56:20.819030 systemd[1]: Started session-18.scope. Feb 9 18:56:21.557599 sshd[4297]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:21.571747 systemd[1]: sshd@17-172.31.24.123:22-139.178.68.195:50810.service: Deactivated successfully. Feb 9 18:56:21.573564 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 18:56:21.573613 systemd-logind[1550]: Session 18 logged out. 
Waiting for processes to exit. Feb 9 18:56:21.575436 systemd-logind[1550]: Removed session 18. Feb 9 18:56:21.583819 systemd[1]: Started sshd@18-172.31.24.123:22-139.178.68.195:50814.service. Feb 9 18:56:21.768164 sshd[4308]: Accepted publickey for core from 139.178.68.195 port 50814 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:21.772055 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:21.779634 systemd[1]: Started session-19.scope. Feb 9 18:56:21.780796 systemd-logind[1550]: New session 19 of user core. Feb 9 18:56:21.998226 sshd[4308]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:22.002969 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit. Feb 9 18:56:22.003146 systemd[1]: sshd@18-172.31.24.123:22-139.178.68.195:50814.service: Deactivated successfully. Feb 9 18:56:22.004523 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 18:56:22.005607 systemd-logind[1550]: Removed session 19. Feb 9 18:56:27.029367 systemd[1]: Started sshd@19-172.31.24.123:22-139.178.68.195:36408.service. Feb 9 18:56:27.213643 sshd[4319]: Accepted publickey for core from 139.178.68.195 port 36408 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:27.215339 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:27.221080 systemd[1]: Started session-20.scope. Feb 9 18:56:27.221677 systemd-logind[1550]: New session 20 of user core. Feb 9 18:56:27.417368 sshd[4319]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:27.421208 systemd[1]: sshd@19-172.31.24.123:22-139.178.68.195:36408.service: Deactivated successfully. Feb 9 18:56:27.422200 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 18:56:27.423042 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit. Feb 9 18:56:27.423992 systemd-logind[1550]: Removed session 20. Feb 9 18:56:32.445266 systemd[1]: Started sshd@20-172.31.24.123:22-139.178.68.195:36418.service. Feb 9 18:56:32.604313 sshd[4333]: Accepted publickey for core from 139.178.68.195 port 36418 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:32.606406 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:32.614140 systemd[1]: Started session-21.scope. Feb 9 18:56:32.615052 systemd-logind[1550]: New session 21 of user core. Feb 9 18:56:32.812516 sshd[4333]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:32.817295 systemd[1]: sshd@20-172.31.24.123:22-139.178.68.195:36418.service: Deactivated successfully. Feb 9 18:56:32.818470 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 18:56:32.819334 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit. Feb 9 18:56:32.820689 systemd-logind[1550]: Removed session 21. Feb 9 18:56:37.840111 systemd[1]: Started sshd@21-172.31.24.123:22-139.178.68.195:51826.service. Feb 9 18:56:38.008500 sshd[4347]: Accepted publickey for core from 139.178.68.195 port 51826 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:38.013116 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:38.029574 systemd-logind[1550]: New session 22 of user core. Feb 9 18:56:38.030518 systemd[1]: Started session-22.scope. 
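The sshd entries in this stretch are routine churn: one socket-activated service unit per connection, one session-N.scope per login, each ending in "Deactivated successfully". When auditing a dump like this one, pairing the pam_unix open/close lines is enough to flag sessions that never closed; a small stdlib sketch reading a journal dump from stdin:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Track sshd PIDs whose sessions opened but never closed in the input.
	opened := regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened`)
	closed := regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed`)

	open := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			open[m[1]] = true
		}
		if m := closed.FindStringSubmatch(line); m != nil {
			delete(open, m[1])
		}
	}
	for pid := range open {
		fmt.Println("sshd pid with unclosed session:", pid)
	}
}
```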
Feb 9 18:56:38.247190 sshd[4347]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:38.251910 systemd[1]: sshd@21-172.31.24.123:22-139.178.68.195:51826.service: Deactivated successfully. Feb 9 18:56:38.253180 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 18:56:38.253892 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit. Feb 9 18:56:38.254940 systemd-logind[1550]: Removed session 22. Feb 9 18:56:43.274290 systemd[1]: Started sshd@22-172.31.24.123:22-139.178.68.195:51836.service. Feb 9 18:56:43.444184 sshd[4359]: Accepted publickey for core from 139.178.68.195 port 51836 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:43.446784 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:43.452387 systemd[1]: Started session-23.scope. Feb 9 18:56:43.453053 systemd-logind[1550]: New session 23 of user core. Feb 9 18:56:43.659282 sshd[4359]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:43.662639 systemd[1]: sshd@22-172.31.24.123:22-139.178.68.195:51836.service: Deactivated successfully. Feb 9 18:56:43.663366 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 18:56:43.664057 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit. Feb 9 18:56:43.664982 systemd-logind[1550]: Removed session 23. Feb 9 18:56:43.688606 systemd[1]: Started sshd@23-172.31.24.123:22-139.178.68.195:51842.service. Feb 9 18:56:43.869376 sshd[4371]: Accepted publickey for core from 139.178.68.195 port 51842 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:43.872207 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:43.885757 systemd-logind[1550]: New session 24 of user core. Feb 9 18:56:43.886557 systemd[1]: Started session-24.scope. Feb 9 18:56:45.794773 env[1558]: time="2024-02-09T18:56:45.794728842Z" level=info msg="StopContainer for \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\" with timeout 30 (s)" Feb 9 18:56:45.797025 env[1558]: time="2024-02-09T18:56:45.796959752Z" level=info msg="Stop container \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\" with signal terminated" Feb 9 18:56:45.820387 systemd[1]: run-containerd-runc-k8s.io-8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16-runc.XgQIE3.mount: Deactivated successfully. Feb 9 18:56:45.849026 systemd[1]: cri-containerd-59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318.scope: Deactivated successfully. Feb 9 18:56:45.874270 env[1558]: time="2024-02-09T18:56:45.874206837Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:56:45.888514 env[1558]: time="2024-02-09T18:56:45.888448003Z" level=info msg="StopContainer for \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\" with timeout 1 (s)" Feb 9 18:56:45.890251 env[1558]: time="2024-02-09T18:56:45.890205678Z" level=info msg="Stop container \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\" with signal terminated" Feb 9 18:56:45.894671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318-rootfs.mount: Deactivated successfully. 
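"StopContainer ... with timeout 30 (s)" above is the CRI's graceful-stop contract: deliver SIGTERM, wait out the grace period, then SIGKILL whatever remains. A minimal sketch of the same policy against a raw containerd task, reusing the operator container ID from the log:

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID taken from the StopContainer log entry above.
	c, err := client.LoadContainer(ctx, "59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Graceful first...
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		log.Fatal(err)
	}
	select {
	case status := <-exitCh:
		log.Println("exited with code", status.ExitCode())
	case <-time.After(30 * time.Second):
		// ...then forceful once the grace period lapses.
		log.Println("grace period over, sending SIGKILL")
		_ = task.Kill(ctx, syscall.SIGKILL)
	}
}
```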
Feb 9 18:56:45.907565 systemd-networkd[1374]: lxc_health: Link DOWN Feb 9 18:56:45.907574 systemd-networkd[1374]: lxc_health: Lost carrier Feb 9 18:56:45.931283 env[1558]: time="2024-02-09T18:56:45.931202076Z" level=info msg="shim disconnected" id=59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318 Feb 9 18:56:45.932728 env[1558]: time="2024-02-09T18:56:45.932467565Z" level=warning msg="cleaning up after shim disconnected" id=59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318 namespace=k8s.io Feb 9 18:56:45.933743 env[1558]: time="2024-02-09T18:56:45.933720528Z" level=info msg="cleaning up dead shim" Feb 9 18:56:45.947781 env[1558]: time="2024-02-09T18:56:45.947735806Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4427 runtime=io.containerd.runc.v2\n" Feb 9 18:56:45.955021 env[1558]: time="2024-02-09T18:56:45.954973367Z" level=info msg="StopContainer for \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\" returns successfully" Feb 9 18:56:45.955987 env[1558]: time="2024-02-09T18:56:45.955956777Z" level=info msg="StopPodSandbox for \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\"" Feb 9 18:56:45.959664 env[1558]: time="2024-02-09T18:56:45.956164154Z" level=info msg="Container to stop \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:56:45.958728 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1-shm.mount: Deactivated successfully. Feb 9 18:56:46.054961 systemd[1]: cri-containerd-8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16.scope: Deactivated successfully. Feb 9 18:56:46.055375 systemd[1]: cri-containerd-8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16.scope: Consumed 8.528s CPU time. Feb 9 18:56:46.057337 systemd[1]: cri-containerd-2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1.scope: Deactivated successfully. 
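systemd's "Consumed 8.528s CPU time" for the agent's scope comes straight from cgroup accounting; while the pod slice still exists, the same figure can be read from its cpu.stat (cgroup v2). The path below mirrors the pod slice seen earlier in the log and is illustrative:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// usage_usec in cpu.stat is the cumulative CPU time (microseconds) that
	// systemd reports when the scope is torn down.
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-podcff362a0_3168_4df3_a0c2_53439f456212.slice/cpu.stat"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		if strings.HasPrefix(line, "usage_usec") {
			fmt.Println(line)
		}
	}
}
```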
Feb 9 18:56:46.113526 env[1558]: time="2024-02-09T18:56:46.113434221Z" level=info msg="shim disconnected" id=8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16 Feb 9 18:56:46.115562 env[1558]: time="2024-02-09T18:56:46.115463245Z" level=warning msg="cleaning up after shim disconnected" id=8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16 namespace=k8s.io Feb 9 18:56:46.115562 env[1558]: time="2024-02-09T18:56:46.115553502Z" level=info msg="cleaning up dead shim" Feb 9 18:56:46.115818 env[1558]: time="2024-02-09T18:56:46.114602326Z" level=info msg="shim disconnected" id=2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1 Feb 9 18:56:46.115933 env[1558]: time="2024-02-09T18:56:46.115832349Z" level=warning msg="cleaning up after shim disconnected" id=2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1 namespace=k8s.io Feb 9 18:56:46.115933 env[1558]: time="2024-02-09T18:56:46.115847988Z" level=info msg="cleaning up dead shim" Feb 9 18:56:46.135398 env[1558]: time="2024-02-09T18:56:46.135353799Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4473 runtime=io.containerd.runc.v2\n" Feb 9 18:56:46.139120 env[1558]: time="2024-02-09T18:56:46.139069032Z" level=info msg="StopContainer for \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\" returns successfully" Feb 9 18:56:46.141046 env[1558]: time="2024-02-09T18:56:46.140446563Z" level=info msg="StopPodSandbox for \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\"" Feb 9 18:56:46.141398 env[1558]: time="2024-02-09T18:56:46.141368769Z" level=info msg="Container to stop \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:56:46.142522 env[1558]: time="2024-02-09T18:56:46.142474803Z" level=info msg="Container to stop \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:56:46.142809 env[1558]: time="2024-02-09T18:56:46.142710391Z" level=info msg="Container to stop \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:56:46.142929 env[1558]: time="2024-02-09T18:56:46.142907720Z" level=info msg="Container to stop \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:56:46.143042 env[1558]: time="2024-02-09T18:56:46.143019020Z" level=info msg="Container to stop \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:56:46.147388 env[1558]: time="2024-02-09T18:56:46.147348901Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4474 runtime=io.containerd.runc.v2\n" Feb 9 18:56:46.148265 env[1558]: time="2024-02-09T18:56:46.148224692Z" level=info msg="TearDown network for sandbox \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\" successfully" Feb 9 18:56:46.148366 env[1558]: time="2024-02-09T18:56:46.148263968Z" level=info msg="StopPodSandbox for \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\" returns successfully" Feb 9 18:56:46.177497 systemd[1]: 
cri-containerd-0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015.scope: Deactivated successfully. Feb 9 18:56:46.235574 env[1558]: time="2024-02-09T18:56:46.235521885Z" level=info msg="shim disconnected" id=0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015 Feb 9 18:56:46.235950 env[1558]: time="2024-02-09T18:56:46.235918711Z" level=warning msg="cleaning up after shim disconnected" id=0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015 namespace=k8s.io Feb 9 18:56:46.236044 env[1558]: time="2024-02-09T18:56:46.235953028Z" level=info msg="cleaning up dead shim" Feb 9 18:56:46.250319 env[1558]: time="2024-02-09T18:56:46.250117613Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4521 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:56:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 9 18:56:46.252340 env[1558]: time="2024-02-09T18:56:46.252302703Z" level=info msg="TearDown network for sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" successfully" Feb 9 18:56:46.252340 env[1558]: time="2024-02-09T18:56:46.252335405Z" level=info msg="StopPodSandbox for \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" returns successfully" Feb 9 18:56:46.314907 kubelet[2540]: I0209 18:56:46.314796 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzwlh\" (UniqueName: \"kubernetes.io/projected/0e736cb8-9fc9-49c0-a3fb-387f22d06ade-kube-api-access-wzwlh\") pod \"0e736cb8-9fc9-49c0-a3fb-387f22d06ade\" (UID: \"0e736cb8-9fc9-49c0-a3fb-387f22d06ade\") " Feb 9 18:56:46.314907 kubelet[2540]: I0209 18:56:46.314892 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e736cb8-9fc9-49c0-a3fb-387f22d06ade-cilium-config-path\") pod \"0e736cb8-9fc9-49c0-a3fb-387f22d06ade\" (UID: \"0e736cb8-9fc9-49c0-a3fb-387f22d06ade\") " Feb 9 18:56:46.317474 kubelet[2540]: W0209 18:56:46.317428 2540 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0e736cb8-9fc9-49c0-a3fb-387f22d06ade/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:56:46.322752 kubelet[2540]: I0209 18:56:46.321336 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e736cb8-9fc9-49c0-a3fb-387f22d06ade-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0e736cb8-9fc9-49c0-a3fb-387f22d06ade" (UID: "0e736cb8-9fc9-49c0-a3fb-387f22d06ade"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:56:46.338338 kubelet[2540]: I0209 18:56:46.338288 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e736cb8-9fc9-49c0-a3fb-387f22d06ade-kube-api-access-wzwlh" (OuterVolumeSpecName: "kube-api-access-wzwlh") pod "0e736cb8-9fc9-49c0-a3fb-387f22d06ade" (UID: "0e736cb8-9fc9-49c0-a3fb-387f22d06ade"). InnerVolumeSpecName "kube-api-access-wzwlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:56:46.415952 kubelet[2540]: I0209 18:56:46.415911 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-host-proc-sys-net\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.415952 kubelet[2540]: I0209 18:56:46.415969 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cilium-run\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416206 kubelet[2540]: I0209 18:56:46.415993 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-hostproc\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416206 kubelet[2540]: I0209 18:56:46.416017 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-etc-cni-netd\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416206 kubelet[2540]: I0209 18:56:46.416040 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cni-path\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416206 kubelet[2540]: I0209 18:56:46.416064 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-lib-modules\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416206 kubelet[2540]: I0209 18:56:46.416100 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cff362a0-3168-4df3-a0c2-53439f456212-clustermesh-secrets\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416206 kubelet[2540]: I0209 18:56:46.416124 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cilium-cgroup\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416467 kubelet[2540]: I0209 18:56:46.416146 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-xtables-lock\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416467 kubelet[2540]: I0209 18:56:46.416174 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-bpf-maps\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416467 kubelet[2540]: I0209 
18:56:46.416201 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-host-proc-sys-kernel\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416467 kubelet[2540]: I0209 18:56:46.416238 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdkd\" (UniqueName: \"kubernetes.io/projected/cff362a0-3168-4df3-a0c2-53439f456212-kube-api-access-wxdkd\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416467 kubelet[2540]: I0209 18:56:46.416269 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cff362a0-3168-4df3-a0c2-53439f456212-hubble-tls\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416467 kubelet[2540]: I0209 18:56:46.416307 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff362a0-3168-4df3-a0c2-53439f456212-cilium-config-path\") pod \"cff362a0-3168-4df3-a0c2-53439f456212\" (UID: \"cff362a0-3168-4df3-a0c2-53439f456212\") " Feb 9 18:56:46.416656 kubelet[2540]: I0209 18:56:46.416361 2540 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wzwlh\" (UniqueName: \"kubernetes.io/projected/0e736cb8-9fc9-49c0-a3fb-387f22d06ade-kube-api-access-wzwlh\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.416656 kubelet[2540]: I0209 18:56:46.416381 2540 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e736cb8-9fc9-49c0-a3fb-387f22d06ade-cilium-config-path\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.417019 kubelet[2540]: I0209 18:56:46.416988 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417118 kubelet[2540]: I0209 18:56:46.417042 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417118 kubelet[2540]: I0209 18:56:46.417066 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417118 kubelet[2540]: I0209 18:56:46.417087 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417355 kubelet[2540]: I0209 18:56:46.417332 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417471 kubelet[2540]: I0209 18:56:46.417441 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417556 kubelet[2540]: I0209 18:56:46.417468 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-hostproc" (OuterVolumeSpecName: "hostproc") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417556 kubelet[2540]: I0209 18:56:46.417502 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417556 kubelet[2540]: I0209 18:56:46.417524 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cni-path" (OuterVolumeSpecName: "cni-path") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.417556 kubelet[2540]: I0209 18:56:46.417544 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:46.419678 kubelet[2540]: W0209 18:56:46.419554 2540 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/cff362a0-3168-4df3-a0c2-53439f456212/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:56:46.423137 kubelet[2540]: I0209 18:56:46.423088 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff362a0-3168-4df3-a0c2-53439f456212-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:56:46.423942 kubelet[2540]: I0209 18:56:46.423912 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff362a0-3168-4df3-a0c2-53439f456212-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:56:46.426880 kubelet[2540]: I0209 18:56:46.426796 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff362a0-3168-4df3-a0c2-53439f456212-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:56:46.428321 kubelet[2540]: I0209 18:56:46.428291 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff362a0-3168-4df3-a0c2-53439f456212-kube-api-access-wxdkd" (OuterVolumeSpecName: "kube-api-access-wxdkd") pod "cff362a0-3168-4df3-a0c2-53439f456212" (UID: "cff362a0-3168-4df3-a0c2-53439f456212"). InnerVolumeSpecName "kube-api-access-wxdkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:56:46.516571 kubelet[2540]: I0209 18:56:46.516525 2540 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-bpf-maps\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516571 kubelet[2540]: I0209 18:56:46.516562 2540 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-host-proc-sys-kernel\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516571 kubelet[2540]: I0209 18:56:46.516578 2540 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wxdkd\" (UniqueName: \"kubernetes.io/projected/cff362a0-3168-4df3-a0c2-53439f456212-kube-api-access-wxdkd\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516822 kubelet[2540]: I0209 18:56:46.516593 2540 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cff362a0-3168-4df3-a0c2-53439f456212-hubble-tls\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516822 kubelet[2540]: I0209 18:56:46.516606 2540 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff362a0-3168-4df3-a0c2-53439f456212-cilium-config-path\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516822 kubelet[2540]: I0209 18:56:46.516619 2540 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cilium-run\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516822 kubelet[2540]: I0209 18:56:46.516630 2540 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-hostproc\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516822 kubelet[2540]: I0209 18:56:46.516643 2540 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-host-proc-sys-net\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516822 kubelet[2540]: I0209 18:56:46.516655 2540 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-etc-cni-netd\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516822 kubelet[2540]: I0209 18:56:46.516667 2540 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cni-path\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.516822 kubelet[2540]: I0209 18:56:46.516679 2540 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-lib-modules\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.517161 kubelet[2540]: I0209 18:56:46.516693 2540 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cff362a0-3168-4df3-a0c2-53439f456212-clustermesh-secrets\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.517161 kubelet[2540]: I0209 18:56:46.516706 2540 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-cilium-cgroup\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.517161 kubelet[2540]: I0209 18:56:46.516719 2540 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cff362a0-3168-4df3-a0c2-53439f456212-xtables-lock\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:46.687131 kubelet[2540]: I0209 18:56:46.687032 2540 scope.go:115] "RemoveContainer" containerID="8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16" Feb 9 18:56:46.698866 systemd[1]: Removed slice kubepods-burstable-podcff362a0_3168_4df3_a0c2_53439f456212.slice. Feb 9 18:56:46.698992 systemd[1]: kubepods-burstable-podcff362a0_3168_4df3_a0c2_53439f456212.slice: Consumed 8.657s CPU time. Feb 9 18:56:46.701950 env[1558]: time="2024-02-09T18:56:46.701907771Z" level=info msg="RemoveContainer for \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\"" Feb 9 18:56:46.717162 env[1558]: time="2024-02-09T18:56:46.717119238Z" level=info msg="RemoveContainer for \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\" returns successfully" Feb 9 18:56:46.718176 kubelet[2540]: I0209 18:56:46.718121 2540 scope.go:115] "RemoveContainer" containerID="5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176" Feb 9 18:56:46.720799 env[1558]: time="2024-02-09T18:56:46.720445650Z" level=info msg="RemoveContainer for \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\"" Feb 9 18:56:46.727131 env[1558]: time="2024-02-09T18:56:46.726799635Z" level=info msg="RemoveContainer for \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\" returns successfully" Feb 9 18:56:46.727656 kubelet[2540]: I0209 18:56:46.727616 2540 scope.go:115] "RemoveContainer" containerID="ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413" Feb 9 18:56:46.731881 env[1558]: time="2024-02-09T18:56:46.731842181Z" level=info msg="RemoveContainer for \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\"" Feb 9 18:56:46.736730 systemd[1]: Removed slice kubepods-besteffort-pod0e736cb8_9fc9_49c0_a3fb_387f22d06ade.slice. 
Feb 9 18:56:46.738451 env[1558]: time="2024-02-09T18:56:46.738417134Z" level=info msg="RemoveContainer for \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\" returns successfully" Feb 9 18:56:46.739086 kubelet[2540]: I0209 18:56:46.739052 2540 scope.go:115] "RemoveContainer" containerID="91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574" Feb 9 18:56:46.742052 env[1558]: time="2024-02-09T18:56:46.741707220Z" level=info msg="RemoveContainer for \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\"" Feb 9 18:56:46.746258 env[1558]: time="2024-02-09T18:56:46.746223940Z" level=info msg="RemoveContainer for \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\" returns successfully" Feb 9 18:56:46.746504 kubelet[2540]: I0209 18:56:46.746462 2540 scope.go:115] "RemoveContainer" containerID="3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed" Feb 9 18:56:46.748448 env[1558]: time="2024-02-09T18:56:46.748412170Z" level=info msg="RemoveContainer for \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\"" Feb 9 18:56:46.759646 env[1558]: time="2024-02-09T18:56:46.759585639Z" level=info msg="RemoveContainer for \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\" returns successfully" Feb 9 18:56:46.760172 kubelet[2540]: I0209 18:56:46.760068 2540 scope.go:115] "RemoveContainer" containerID="8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16" Feb 9 18:56:46.761061 env[1558]: time="2024-02-09T18:56:46.760858785Z" level=error msg="ContainerStatus for \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\": not found" Feb 9 18:56:46.764187 kubelet[2540]: E0209 18:56:46.764159 2540 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\": not found" containerID="8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16" Feb 9 18:56:46.764576 kubelet[2540]: I0209 18:56:46.764552 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16} err="failed to get container status \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16\": not found" Feb 9 18:56:46.764576 kubelet[2540]: I0209 18:56:46.764576 2540 scope.go:115] "RemoveContainer" containerID="5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176" Feb 9 18:56:46.765028 env[1558]: time="2024-02-09T18:56:46.764953866Z" level=error msg="ContainerStatus for \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\": not found" Feb 9 18:56:46.765158 kubelet[2540]: E0209 18:56:46.765146 2540 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\": not found" 
containerID="5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176" Feb 9 18:56:46.765650 kubelet[2540]: I0209 18:56:46.765179 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176} err="failed to get container status \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f69125b7c7aaa931ce1c587b01c401123a43ae01775bb486c620840d9666176\": not found" Feb 9 18:56:46.765650 kubelet[2540]: I0209 18:56:46.765195 2540 scope.go:115] "RemoveContainer" containerID="ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413" Feb 9 18:56:46.766286 env[1558]: time="2024-02-09T18:56:46.766188119Z" level=error msg="ContainerStatus for \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\": not found" Feb 9 18:56:46.766713 kubelet[2540]: E0209 18:56:46.766688 2540 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\": not found" containerID="ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413" Feb 9 18:56:46.766786 kubelet[2540]: I0209 18:56:46.766728 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413} err="failed to get container status \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d9b58020b287131af06f0df093a54bb92590cddaa5639a4f582680d8b3413\": not found" Feb 9 18:56:46.766786 kubelet[2540]: I0209 18:56:46.766742 2540 scope.go:115] "RemoveContainer" containerID="91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574" Feb 9 18:56:46.767292 env[1558]: time="2024-02-09T18:56:46.766972378Z" level=error msg="ContainerStatus for \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\": not found" Feb 9 18:56:46.767458 kubelet[2540]: E0209 18:56:46.767438 2540 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\": not found" containerID="91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574" Feb 9 18:56:46.767633 kubelet[2540]: I0209 18:56:46.767471 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574} err="failed to get container status \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\": rpc error: code = NotFound desc = an error occurred when try to find container \"91dcc7b22494f1d519057c60200a6593aea910a9accef170bc35877e0a291574\": not found" Feb 9 18:56:46.767633 kubelet[2540]: I0209 18:56:46.767499 2540 scope.go:115] "RemoveContainer" containerID="3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed" Feb 9 18:56:46.767916 
env[1558]: time="2024-02-09T18:56:46.767792366Z" level=error msg="ContainerStatus for \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\": not found" Feb 9 18:56:46.768058 kubelet[2540]: E0209 18:56:46.768037 2540 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\": not found" containerID="3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed" Feb 9 18:56:46.768126 kubelet[2540]: I0209 18:56:46.768070 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed} err="failed to get container status \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"3938fcc0b6801d216f49b882e0cbff2852b23a5eb1719b3c86054d7e5dc229ed\": not found" Feb 9 18:56:46.768126 kubelet[2540]: I0209 18:56:46.768083 2540 scope.go:115] "RemoveContainer" containerID="59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318" Feb 9 18:56:46.771085 env[1558]: time="2024-02-09T18:56:46.770882511Z" level=info msg="RemoveContainer for \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\"" Feb 9 18:56:46.777710 env[1558]: time="2024-02-09T18:56:46.777666967Z" level=info msg="RemoveContainer for \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\" returns successfully" Feb 9 18:56:46.778033 kubelet[2540]: I0209 18:56:46.778004 2540 scope.go:115] "RemoveContainer" containerID="59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318" Feb 9 18:56:46.779115 env[1558]: time="2024-02-09T18:56:46.778286547Z" level=error msg="ContainerStatus for \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\": not found" Feb 9 18:56:46.779375 kubelet[2540]: E0209 18:56:46.779343 2540 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\": not found" containerID="59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318" Feb 9 18:56:46.779825 kubelet[2540]: I0209 18:56:46.779385 2540 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318} err="failed to get container status \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\": rpc error: code = NotFound desc = an error occurred when try to find container \"59ce63c5596b493c3409bfc1c945bd45ca696a1998d4608857682352e9d9d318\": not found" Feb 9 18:56:46.813302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e839a7cf9b23342aa5b065d428a728459ad94eb4bfe8cc2df3fc6fd3cfaac16-rootfs.mount: Deactivated successfully. Feb 9 18:56:46.813432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015-rootfs.mount: Deactivated successfully. 
Feb 9 18:56:46.813541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015-shm.mount: Deactivated successfully. Feb 9 18:56:46.813633 systemd[1]: var-lib-kubelet-pods-cff362a0\x2d3168\x2d4df3\x2da0c2\x2d53439f456212-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:56:46.813721 systemd[1]: var-lib-kubelet-pods-cff362a0\x2d3168\x2d4df3\x2da0c2\x2d53439f456212-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:56:46.813796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1-rootfs.mount: Deactivated successfully. Feb 9 18:56:46.813922 systemd[1]: var-lib-kubelet-pods-0e736cb8\x2d9fc9\x2d49c0\x2da3fb\x2d387f22d06ade-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwzwlh.mount: Deactivated successfully. Feb 9 18:56:46.814010 systemd[1]: var-lib-kubelet-pods-cff362a0\x2d3168\x2d4df3\x2da0c2\x2d53439f456212-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwxdkd.mount: Deactivated successfully. Feb 9 18:56:47.747239 sshd[4371]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:47.751509 systemd[1]: sshd@23-172.31.24.123:22-139.178.68.195:51842.service: Deactivated successfully. Feb 9 18:56:47.753208 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 18:56:47.753233 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit. Feb 9 18:56:47.755581 systemd-logind[1550]: Removed session 24. Feb 9 18:56:47.772961 systemd[1]: Started sshd@24-172.31.24.123:22-139.178.68.195:34330.service. Feb 9 18:56:47.950987 sshd[4540]: Accepted publickey for core from 139.178.68.195 port 34330 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:47.953014 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:47.959803 systemd[1]: Started session-25.scope. Feb 9 18:56:47.961109 systemd-logind[1550]: New session 25 of user core. 
Feb 9 18:56:48.288846 kubelet[2540]: I0209 18:56:48.288815 2540 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=0e736cb8-9fc9-49c0-a3fb-387f22d06ade path="/var/lib/kubelet/pods/0e736cb8-9fc9-49c0-a3fb-387f22d06ade/volumes" Feb 9 18:56:48.290822 kubelet[2540]: I0209 18:56:48.290785 2540 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=cff362a0-3168-4df3-a0c2-53439f456212 path="/var/lib/kubelet/pods/cff362a0-3168-4df3-a0c2-53439f456212/volumes" Feb 9 18:56:48.306581 env[1558]: time="2024-02-09T18:56:48.306536436Z" level=info msg="StopPodSandbox for \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\"" Feb 9 18:56:48.307346 env[1558]: time="2024-02-09T18:56:48.306643455Z" level=info msg="TearDown network for sandbox \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\" successfully" Feb 9 18:56:48.307346 env[1558]: time="2024-02-09T18:56:48.306687609Z" level=info msg="StopPodSandbox for \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\" returns successfully" Feb 9 18:56:48.307574 env[1558]: time="2024-02-09T18:56:48.307543472Z" level=info msg="RemovePodSandbox for \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\"" Feb 9 18:56:48.307638 env[1558]: time="2024-02-09T18:56:48.307590911Z" level=info msg="Forcibly stopping sandbox \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\"" Feb 9 18:56:48.307706 env[1558]: time="2024-02-09T18:56:48.307685228Z" level=info msg="TearDown network for sandbox \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\" successfully" Feb 9 18:56:48.313648 env[1558]: time="2024-02-09T18:56:48.313602438Z" level=info msg="RemovePodSandbox \"2765fe31f9b7391d4a525993b7bb0dd64ee37e0625038959f046d5c738536dc1\" returns successfully" Feb 9 18:56:48.314166 env[1558]: time="2024-02-09T18:56:48.314137655Z" level=info msg="StopPodSandbox for \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\"" Feb 9 18:56:48.314270 env[1558]: time="2024-02-09T18:56:48.314225576Z" level=info msg="TearDown network for sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" successfully" Feb 9 18:56:48.314330 env[1558]: time="2024-02-09T18:56:48.314272993Z" level=info msg="StopPodSandbox for \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" returns successfully" Feb 9 18:56:48.314731 env[1558]: time="2024-02-09T18:56:48.314691358Z" level=info msg="RemovePodSandbox for \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\"" Feb 9 18:56:48.314808 env[1558]: time="2024-02-09T18:56:48.314737095Z" level=info msg="Forcibly stopping sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\"" Feb 9 18:56:48.314868 env[1558]: time="2024-02-09T18:56:48.314829601Z" level=info msg="TearDown network for sandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" successfully" Feb 9 18:56:48.320360 env[1558]: time="2024-02-09T18:56:48.320030541Z" level=info msg="RemovePodSandbox \"0f4f40a6eb8a03cd0f2ae5c0ab4081a96fb5d1d5ee26ca6b730c6d20b9669015\" returns successfully" Feb 9 18:56:48.459955 kubelet[2540]: E0209 18:56:48.459918 2540 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:56:49.096169 sshd[4540]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:49.099958 systemd[1]: 
sshd@24-172.31.24.123:22-139.178.68.195:34330.service: Deactivated successfully. Feb 9 18:56:49.101242 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 18:56:49.105316 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit. Feb 9 18:56:49.106919 systemd-logind[1550]: Removed session 25. Feb 9 18:56:49.125020 systemd[1]: Started sshd@25-172.31.24.123:22-139.178.68.195:34338.service. Feb 9 18:56:49.151574 kubelet[2540]: I0209 18:56:49.151527 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:56:49.151750 kubelet[2540]: E0209 18:56:49.151677 2540 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cff362a0-3168-4df3-a0c2-53439f456212" containerName="mount-bpf-fs" Feb 9 18:56:49.151750 kubelet[2540]: E0209 18:56:49.151697 2540 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cff362a0-3168-4df3-a0c2-53439f456212" containerName="clean-cilium-state" Feb 9 18:56:49.151750 kubelet[2540]: E0209 18:56:49.151707 2540 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cff362a0-3168-4df3-a0c2-53439f456212" containerName="cilium-agent" Feb 9 18:56:49.151750 kubelet[2540]: E0209 18:56:49.151717 2540 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e736cb8-9fc9-49c0-a3fb-387f22d06ade" containerName="cilium-operator" Feb 9 18:56:49.151750 kubelet[2540]: E0209 18:56:49.151728 2540 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cff362a0-3168-4df3-a0c2-53439f456212" containerName="mount-cgroup" Feb 9 18:56:49.151750 kubelet[2540]: E0209 18:56:49.151738 2540 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cff362a0-3168-4df3-a0c2-53439f456212" containerName="apply-sysctl-overwrites" Feb 9 18:56:49.152004 kubelet[2540]: I0209 18:56:49.151778 2540 memory_manager.go:346] "RemoveStaleState removing state" podUID="cff362a0-3168-4df3-a0c2-53439f456212" containerName="cilium-agent" Feb 9 18:56:49.152004 kubelet[2540]: I0209 18:56:49.151791 2540 memory_manager.go:346] "RemoveStaleState removing state" podUID="0e736cb8-9fc9-49c0-a3fb-387f22d06ade" containerName="cilium-operator" Feb 9 18:56:49.160627 systemd[1]: Created slice kubepods-burstable-podd09128a8_a647_4059_885f_133d0f2f57d1.slice. 
Feb 9 18:56:49.208746 kubelet[2540]: W0209 18:56:49.208701 2540 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:56:49.208916 kubelet[2540]: W0209 18:56:49.208778 2540 reflector.go:533] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:56:49.210255 kubelet[2540]: E0209 18:56:49.210181 2540 reflector.go:148] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:56:49.210710 kubelet[2540]: W0209 18:56:49.210692 2540 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:56:49.210789 kubelet[2540]: E0209 18:56:49.210731 2540 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:56:49.210854 kubelet[2540]: W0209 18:56:49.210805 2540 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:56:49.210854 kubelet[2540]: E0209 18:56:49.210821 2540 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:56:49.210854 kubelet[2540]: E0209 18:56:49.210840 2540 reflector.go:148] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-24-123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-123' and this object Feb 9 18:56:49.241746 kubelet[2540]: I0209 18:56:49.241232 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-ipsec-secrets\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 
18:56:49.241746 kubelet[2540]: I0209 18:56:49.241281 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-config-path\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.241746 kubelet[2540]: I0209 18:56:49.241311 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d09128a8-a647-4059-885f-133d0f2f57d1-hubble-tls\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.241746 kubelet[2540]: I0209 18:56:49.241340 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-xtables-lock\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.241746 kubelet[2540]: I0209 18:56:49.241368 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-cgroup\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.241746 kubelet[2540]: I0209 18:56:49.241401 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d09128a8-a647-4059-885f-133d0f2f57d1-clustermesh-secrets\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242135 kubelet[2540]: I0209 18:56:49.241429 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-run\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242135 kubelet[2540]: I0209 18:56:49.241461 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-host-proc-sys-kernel\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242135 kubelet[2540]: I0209 18:56:49.241509 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzr5\" (UniqueName: \"kubernetes.io/projected/d09128a8-a647-4059-885f-133d0f2f57d1-kube-api-access-ctzr5\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242135 kubelet[2540]: I0209 18:56:49.241544 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-bpf-maps\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242135 kubelet[2540]: I0209 18:56:49.241574 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cni-path\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242135 kubelet[2540]: I0209 18:56:49.241605 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-hostproc\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242443 kubelet[2540]: I0209 18:56:49.241633 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-etc-cni-netd\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242443 kubelet[2540]: I0209 18:56:49.241663 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-lib-modules\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.242443 kubelet[2540]: I0209 18:56:49.241693 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-host-proc-sys-net\") pod \"cilium-fdml2\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " pod="kube-system/cilium-fdml2" Feb 9 18:56:49.301990 sshd[4552]: Accepted publickey for core from 139.178.68.195 port 34338 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:49.304778 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:49.312507 systemd-logind[1550]: New session 26 of user core. Feb 9 18:56:49.314069 systemd[1]: Started session-26.scope. Feb 9 18:56:49.608852 sshd[4552]: pam_unix(sshd:session): session closed for user core Feb 9 18:56:49.614623 systemd-logind[1550]: Session 26 logged out. Waiting for processes to exit. Feb 9 18:56:49.615704 systemd[1]: sshd@25-172.31.24.123:22-139.178.68.195:34338.service: Deactivated successfully. Feb 9 18:56:49.616766 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 18:56:49.619092 systemd-logind[1550]: Removed session 26. Feb 9 18:56:49.634568 systemd[1]: Started sshd@26-172.31.24.123:22-139.178.68.195:34354.service. Feb 9 18:56:49.803556 sshd[4564]: Accepted publickey for core from 139.178.68.195 port 34354 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 18:56:49.807566 sshd[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:56:49.829767 systemd-logind[1550]: New session 27 of user core. Feb 9 18:56:49.831057 systemd[1]: Started session-27.scope. Feb 9 18:56:50.370687 env[1558]: time="2024-02-09T18:56:50.370635958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdml2,Uid:d09128a8-a647-4059-885f-133d0f2f57d1,Namespace:kube-system,Attempt:0,}" Feb 9 18:56:50.409013 env[1558]: time="2024-02-09T18:56:50.407961623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:56:50.409013 env[1558]: time="2024-02-09T18:56:50.408078584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:56:50.409013 env[1558]: time="2024-02-09T18:56:50.408112024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:56:50.409269 env[1558]: time="2024-02-09T18:56:50.409082038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5 pid=4583 runtime=io.containerd.runc.v2 Feb 9 18:56:50.440263 systemd[1]: Started cri-containerd-d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5.scope. Feb 9 18:56:50.483803 env[1558]: time="2024-02-09T18:56:50.483755042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fdml2,Uid:d09128a8-a647-4059-885f-133d0f2f57d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5\"" Feb 9 18:56:50.490572 env[1558]: time="2024-02-09T18:56:50.490534930Z" level=info msg="CreateContainer within sandbox \"d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:56:50.514933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631652555.mount: Deactivated successfully. Feb 9 18:56:50.530436 env[1558]: time="2024-02-09T18:56:50.530364954Z" level=info msg="CreateContainer within sandbox \"d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0\"" Feb 9 18:56:50.535024 env[1558]: time="2024-02-09T18:56:50.534988501Z" level=info msg="StartContainer for \"fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0\"" Feb 9 18:56:50.562216 systemd[1]: Started cri-containerd-fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0.scope. Feb 9 18:56:50.585909 systemd[1]: cri-containerd-fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0.scope: Deactivated successfully. 
Feb 9 18:56:50.618296 env[1558]: time="2024-02-09T18:56:50.618223447Z" level=info msg="shim disconnected" id=fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0 Feb 9 18:56:50.618765 env[1558]: time="2024-02-09T18:56:50.618722271Z" level=warning msg="cleaning up after shim disconnected" id=fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0 namespace=k8s.io Feb 9 18:56:50.618967 env[1558]: time="2024-02-09T18:56:50.618947353Z" level=info msg="cleaning up dead shim" Feb 9 18:56:50.626552 kubelet[2540]: I0209 18:56:50.626427 2540 setters.go:548] "Node became not ready" node="ip-172-31-24-123" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:56:50.62566986 +0000 UTC m=+122.659399193 LastTransitionTime:2024-02-09 18:56:50.62566986 +0000 UTC m=+122.659399193 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 18:56:50.634220 env[1558]: time="2024-02-09T18:56:50.634162555Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4646 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:56:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:56:50.634928 env[1558]: time="2024-02-09T18:56:50.634808911Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Feb 9 18:56:50.639167 env[1558]: time="2024-02-09T18:56:50.638567942Z" level=error msg="Failed to pipe stdout of container \"fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0\"" error="reading from a closed fifo" Feb 9 18:56:50.639370 env[1558]: time="2024-02-09T18:56:50.638703698Z" level=error msg="Failed to pipe stderr of container \"fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0\"" error="reading from a closed fifo" Feb 9 18:56:50.641339 env[1558]: time="2024-02-09T18:56:50.641251174Z" level=error msg="StartContainer for \"fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:56:50.641835 kubelet[2540]: E0209 18:56:50.641792 2540 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0" Feb 9 18:56:50.643259 kubelet[2540]: E0209 18:56:50.642695 2540 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:56:50.643259 kubelet[2540]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:56:50.643259 kubelet[2540]: rm /hostbin/cilium-mount Feb 9 
18:56:50.643446 kubelet[2540]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ctzr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-fdml2_kube-system(d09128a8-a647-4059-885f-133d0f2f57d1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:56:50.643446 kubelet[2540]: E0209 18:56:50.643242 2540 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fdml2" podUID=d09128a8-a647-4059-885f-133d0f2f57d1 Feb 9 18:56:50.721055 env[1558]: time="2024-02-09T18:56:50.721004898Z" level=info msg="StopPodSandbox for \"d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5\"" Feb 9 18:56:50.721217 env[1558]: time="2024-02-09T18:56:50.721074677Z" level=info msg="Container to stop \"fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:56:50.732444 systemd[1]: cri-containerd-d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5.scope: Deactivated successfully. 
Feb 9 18:56:50.775311 env[1558]: time="2024-02-09T18:56:50.775260599Z" level=info msg="shim disconnected" id=d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5 Feb 9 18:56:50.775633 env[1558]: time="2024-02-09T18:56:50.775607954Z" level=warning msg="cleaning up after shim disconnected" id=d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5 namespace=k8s.io Feb 9 18:56:50.775752 env[1558]: time="2024-02-09T18:56:50.775735633Z" level=info msg="cleaning up dead shim" Feb 9 18:56:50.786750 env[1558]: time="2024-02-09T18:56:50.786702864Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4676 runtime=io.containerd.runc.v2\n" Feb 9 18:56:50.787102 env[1558]: time="2024-02-09T18:56:50.787068407Z" level=info msg="TearDown network for sandbox \"d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5\" successfully" Feb 9 18:56:50.787179 env[1558]: time="2024-02-09T18:56:50.787100545Z" level=info msg="StopPodSandbox for \"d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5\" returns successfully" Feb 9 18:56:50.854678 kubelet[2540]: I0209 18:56:50.854634 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-host-proc-sys-kernel\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854678 kubelet[2540]: I0209 18:56:50.854685 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-host-proc-sys-net\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854712 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-etc-cni-netd\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854746 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-ipsec-secrets\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854771 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cni-path\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854795 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-cgroup\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854825 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d09128a8-a647-4059-885f-133d0f2f57d1-clustermesh-secrets\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: 
\"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854848 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-lib-modules\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854874 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-run\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854900 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-xtables-lock\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.854933 kubelet[2540]: I0209 18:56:50.854928 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-hostproc\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.855316 kubelet[2540]: I0209 18:56:50.854959 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-bpf-maps\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.855316 kubelet[2540]: I0209 18:56:50.854994 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d09128a8-a647-4059-885f-133d0f2f57d1-hubble-tls\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.855316 kubelet[2540]: I0209 18:56:50.855029 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctzr5\" (UniqueName: \"kubernetes.io/projected/d09128a8-a647-4059-885f-133d0f2f57d1-kube-api-access-ctzr5\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.855316 kubelet[2540]: I0209 18:56:50.855064 2540 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-config-path\") pod \"d09128a8-a647-4059-885f-133d0f2f57d1\" (UID: \"d09128a8-a647-4059-885f-133d0f2f57d1\") " Feb 9 18:56:50.855516 kubelet[2540]: W0209 18:56:50.855341 2540 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/d09128a8-a647-4059-885f-133d0f2f57d1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:56:50.857515 kubelet[2540]: I0209 18:56:50.856453 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.857515 kubelet[2540]: I0209 18:56:50.856515 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.857515 kubelet[2540]: I0209 18:56:50.856540 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.858065 kubelet[2540]: I0209 18:56:50.858029 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:56:50.858156 kubelet[2540]: I0209 18:56:50.858080 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.858156 kubelet[2540]: I0209 18:56:50.858105 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.858156 kubelet[2540]: I0209 18:56:50.858129 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.858156 kubelet[2540]: I0209 18:56:50.858149 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-hostproc" (OuterVolumeSpecName: "hostproc") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.858338 kubelet[2540]: I0209 18:56:50.858172 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.858860 kubelet[2540]: I0209 18:56:50.858814 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cni-path" (OuterVolumeSpecName: "cni-path") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.859096 kubelet[2540]: I0209 18:56:50.858878 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:56:50.861271 kubelet[2540]: I0209 18:56:50.861235 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09128a8-a647-4059-885f-133d0f2f57d1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:56:50.864130 kubelet[2540]: I0209 18:56:50.864101 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:56:50.864422 kubelet[2540]: I0209 18:56:50.864400 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09128a8-a647-4059-885f-133d0f2f57d1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:56:50.866738 kubelet[2540]: I0209 18:56:50.866708 2540 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09128a8-a647-4059-885f-133d0f2f57d1-kube-api-access-ctzr5" (OuterVolumeSpecName: "kube-api-access-ctzr5") pod "d09128a8-a647-4059-885f-133d0f2f57d1" (UID: "d09128a8-a647-4059-885f-133d0f2f57d1"). InnerVolumeSpecName "kube-api-access-ctzr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955758 2540 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d09128a8-a647-4059-885f-133d0f2f57d1-hubble-tls\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955802 2540 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ctzr5\" (UniqueName: \"kubernetes.io/projected/d09128a8-a647-4059-885f-133d0f2f57d1-kube-api-access-ctzr5\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955818 2540 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-config-path\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955831 2540 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-etc-cni-netd\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955847 2540 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-ipsec-secrets\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955861 2540 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-host-proc-sys-kernel\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955874 2540 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-host-proc-sys-net\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955887 2540 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-cgroup\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955900 2540 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cni-path\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955914 2540 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d09128a8-a647-4059-885f-133d0f2f57d1-clustermesh-secrets\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955927 2540 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-lib-modules\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955942 2540 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-cilium-run\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955954 2540 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-xtables-lock\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955968 2540 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-bpf-maps\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:50.956033 kubelet[2540]: I0209 18:56:50.955982 2540 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d09128a8-a647-4059-885f-133d0f2f57d1-hostproc\") on node \"ip-172-31-24-123\" DevicePath \"\"" Feb 9 18:56:51.420862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5-rootfs.mount: Deactivated successfully. Feb 9 18:56:51.421000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2427eee9034c675d7bf7926473943bd8a463f2cbb9fcd00d91a80e7ee213cb5-shm.mount: Deactivated successfully. Feb 9 18:56:51.421085 systemd[1]: var-lib-kubelet-pods-d09128a8\x2da647\x2d4059\x2d885f\x2d133d0f2f57d1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:56:51.421163 systemd[1]: var-lib-kubelet-pods-d09128a8\x2da647\x2d4059\x2d885f\x2d133d0f2f57d1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 18:56:51.421245 systemd[1]: var-lib-kubelet-pods-d09128a8\x2da647\x2d4059\x2d885f\x2d133d0f2f57d1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:56:51.421333 systemd[1]: var-lib-kubelet-pods-d09128a8\x2da647\x2d4059\x2d885f\x2d133d0f2f57d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dctzr5.mount: Deactivated successfully. Feb 9 18:56:51.724411 kubelet[2540]: I0209 18:56:51.724021 2540 scope.go:115] "RemoveContainer" containerID="fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0" Feb 9 18:56:51.727493 env[1558]: time="2024-02-09T18:56:51.727443637Z" level=info msg="RemoveContainer for \"fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0\"" Feb 9 18:56:51.730918 systemd[1]: Removed slice kubepods-burstable-podd09128a8_a647_4059_885f_133d0f2f57d1.slice. Feb 9 18:56:51.733470 env[1558]: time="2024-02-09T18:56:51.733420995Z" level=info msg="RemoveContainer for \"fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0\" returns successfully" Feb 9 18:56:51.777070 kubelet[2540]: I0209 18:56:51.777030 2540 topology_manager.go:212] "Topology Admit Handler" Feb 9 18:56:51.777273 kubelet[2540]: E0209 18:56:51.777107 2540 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d09128a8-a647-4059-885f-133d0f2f57d1" containerName="mount-cgroup" Feb 9 18:56:51.777273 kubelet[2540]: I0209 18:56:51.777138 2540 memory_manager.go:346] "RemoveStaleState removing state" podUID="d09128a8-a647-4059-885f-133d0f2f57d1" containerName="mount-cgroup" Feb 9 18:56:51.783839 systemd[1]: Created slice kubepods-burstable-pod9901d2ec_8fca_4fc6_aed1_37feb4f68d01.slice. 
Feb 9 18:56:51.859922 kubelet[2540]: I0209 18:56:51.859889 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-lib-modules\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.859922 kubelet[2540]: I0209 18:56:51.859935 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-xtables-lock\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860164 kubelet[2540]: I0209 18:56:51.859965 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-clustermesh-secrets\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860164 kubelet[2540]: I0209 18:56:51.859997 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-cilium-ipsec-secrets\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860164 kubelet[2540]: I0209 18:56:51.860023 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-hubble-tls\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860164 kubelet[2540]: I0209 18:56:51.860053 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-cilium-run\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860164 kubelet[2540]: I0209 18:56:51.860082 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-cilium-cgroup\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860164 kubelet[2540]: I0209 18:56:51.860111 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-cilium-config-path\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860164 kubelet[2540]: I0209 18:56:51.860140 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-host-proc-sys-net\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860164 kubelet[2540]: I0209 18:56:51.860168 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-bpf-maps\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860528 kubelet[2540]: I0209 18:56:51.860199 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-hostproc\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860528 kubelet[2540]: I0209 18:56:51.860229 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-cni-path\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860528 kubelet[2540]: I0209 18:56:51.860261 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-host-proc-sys-kernel\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860528 kubelet[2540]: I0209 18:56:51.860292 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fpf\" (UniqueName: \"kubernetes.io/projected/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-kube-api-access-q2fpf\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:51.860528 kubelet[2540]: I0209 18:56:51.860322 2540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9901d2ec-8fca-4fc6-aed1-37feb4f68d01-etc-cni-netd\") pod \"cilium-r5925\" (UID: \"9901d2ec-8fca-4fc6-aed1-37feb4f68d01\") " pod="kube-system/cilium-r5925" Feb 9 18:56:52.090594 env[1558]: time="2024-02-09T18:56:52.090545032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r5925,Uid:9901d2ec-8fca-4fc6-aed1-37feb4f68d01,Namespace:kube-system,Attempt:0,}" Feb 9 18:56:52.110793 env[1558]: time="2024-02-09T18:56:52.110593858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:56:52.110793 env[1558]: time="2024-02-09T18:56:52.110641978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:56:52.110793 env[1558]: time="2024-02-09T18:56:52.110654540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:56:52.111129 env[1558]: time="2024-02-09T18:56:52.110824592Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69 pid=4706 runtime=io.containerd.runc.v2 Feb 9 18:56:52.126894 systemd[1]: Started cri-containerd-476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69.scope. 
Feb 9 18:56:52.159079 env[1558]: time="2024-02-09T18:56:52.159028979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r5925,Uid:9901d2ec-8fca-4fc6-aed1-37feb4f68d01,Namespace:kube-system,Attempt:0,} returns sandbox id \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\"" Feb 9 18:56:52.162932 env[1558]: time="2024-02-09T18:56:52.162892230Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:56:52.187682 env[1558]: time="2024-02-09T18:56:52.187630204Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013\"" Feb 9 18:56:52.188808 env[1558]: time="2024-02-09T18:56:52.188731780Z" level=info msg="StartContainer for \"c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013\"" Feb 9 18:56:52.207765 systemd[1]: Started cri-containerd-c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013.scope. Feb 9 18:56:52.248240 env[1558]: time="2024-02-09T18:56:52.248193280Z" level=info msg="StartContainer for \"c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013\" returns successfully" Feb 9 18:56:52.285156 kubelet[2540]: I0209 18:56:52.285131 2540 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d09128a8-a647-4059-885f-133d0f2f57d1 path="/var/lib/kubelet/pods/d09128a8-a647-4059-885f-133d0f2f57d1/volumes" Feb 9 18:56:52.285628 systemd[1]: cri-containerd-c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013.scope: Deactivated successfully. Feb 9 18:56:52.363526 env[1558]: time="2024-02-09T18:56:52.363395100Z" level=info msg="shim disconnected" id=c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013 Feb 9 18:56:52.363941 env[1558]: time="2024-02-09T18:56:52.363905713Z" level=warning msg="cleaning up after shim disconnected" id=c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013 namespace=k8s.io Feb 9 18:56:52.364059 env[1558]: time="2024-02-09T18:56:52.364044730Z" level=info msg="cleaning up dead shim" Feb 9 18:56:52.376560 env[1558]: time="2024-02-09T18:56:52.376521745Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4788 runtime=io.containerd.runc.v2\n" Feb 9 18:56:52.732982 env[1558]: time="2024-02-09T18:56:52.732928445Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:56:52.754715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3573457647.mount: Deactivated successfully. Feb 9 18:56:52.769437 env[1558]: time="2024-02-09T18:56:52.768303289Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b\"" Feb 9 18:56:52.770394 env[1558]: time="2024-02-09T18:56:52.770349368Z" level=info msg="StartContainer for \"75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b\"" Feb 9 18:56:52.795862 systemd[1]: Started cri-containerd-75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b.scope. 
Feb 9 18:56:52.842094 env[1558]: time="2024-02-09T18:56:52.842040126Z" level=info msg="StartContainer for \"75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b\" returns successfully" Feb 9 18:56:52.864730 systemd[1]: cri-containerd-75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b.scope: Deactivated successfully. Feb 9 18:56:52.908932 env[1558]: time="2024-02-09T18:56:52.908883450Z" level=info msg="shim disconnected" id=75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b Feb 9 18:56:52.908932 env[1558]: time="2024-02-09T18:56:52.908931168Z" level=warning msg="cleaning up after shim disconnected" id=75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b namespace=k8s.io Feb 9 18:56:52.908932 env[1558]: time="2024-02-09T18:56:52.908943681Z" level=info msg="cleaning up dead shim" Feb 9 18:56:52.918785 env[1558]: time="2024-02-09T18:56:52.918737502Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4853 runtime=io.containerd.runc.v2\n" Feb 9 18:56:53.421178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b-rootfs.mount: Deactivated successfully. Feb 9 18:56:53.461359 kubelet[2540]: E0209 18:56:53.461327 2540 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:56:53.733534 kubelet[2540]: W0209 18:56:53.733193 2540 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd09128a8_a647_4059_885f_133d0f2f57d1.slice/cri-containerd-fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0.scope WatchSource:0}: container "fa5e83750458b96ccf3a67165a396123b1c40b7f22bdc12f6afec686a72034d0" in namespace "k8s.io": not found Feb 9 18:56:53.746102 env[1558]: time="2024-02-09T18:56:53.746056491Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:56:53.768256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015916866.mount: Deactivated successfully. Feb 9 18:56:53.778763 env[1558]: time="2024-02-09T18:56:53.778713726Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210\"" Feb 9 18:56:53.779507 env[1558]: time="2024-02-09T18:56:53.779416641Z" level=info msg="StartContainer for \"14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210\"" Feb 9 18:56:53.809962 systemd[1]: Started cri-containerd-14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210.scope. Feb 9 18:56:53.858888 env[1558]: time="2024-02-09T18:56:53.858834692Z" level=info msg="StartContainer for \"14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210\" returns successfully" Feb 9 18:56:53.865311 systemd[1]: cri-containerd-14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210.scope: Deactivated successfully. 
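Each init container in the sequence above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) runs to completion, so its runc shim exits right after StartContainer returns; that is why every container id reappears in a "shim disconnected" / "cleaning up dead shim" pair, and why the later watch events report the tasks as not found. A minimal sketch, assuming containerd's Go client and the default socket path, for listing which tasks in the CRI-managed k8s.io namespace are still alive:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace, as in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			// Exited init containers no longer have a live task.
			fmt.Printf("%s: no task (%v)\n", c.ID(), err)
			continue
		}
		status, err := task.Status(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s\n", c.ID(), status.Status)
	}
}
```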
Feb 9 18:56:53.911778 env[1558]: time="2024-02-09T18:56:53.911726864Z" level=info msg="shim disconnected" id=14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210 Feb 9 18:56:53.911778 env[1558]: time="2024-02-09T18:56:53.911778538Z" level=warning msg="cleaning up after shim disconnected" id=14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210 namespace=k8s.io Feb 9 18:56:53.911778 env[1558]: time="2024-02-09T18:56:53.911838279Z" level=info msg="cleaning up dead shim" Feb 9 18:56:53.923581 env[1558]: time="2024-02-09T18:56:53.923476380Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4914 runtime=io.containerd.runc.v2\n" Feb 9 18:56:54.420846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210-rootfs.mount: Deactivated successfully. Feb 9 18:56:54.748313 env[1558]: time="2024-02-09T18:56:54.748267706Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:56:54.767950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879981169.mount: Deactivated successfully. Feb 9 18:56:54.780692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426939655.mount: Deactivated successfully. Feb 9 18:56:54.784357 env[1558]: time="2024-02-09T18:56:54.784266178Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390\"" Feb 9 18:56:54.789993 env[1558]: time="2024-02-09T18:56:54.789953961Z" level=info msg="StartContainer for \"c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390\"" Feb 9 18:56:54.814030 systemd[1]: Started cri-containerd-c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390.scope. Feb 9 18:56:54.850277 systemd[1]: cri-containerd-c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390.scope: Deactivated successfully. Feb 9 18:56:54.853659 env[1558]: time="2024-02-09T18:56:54.853605968Z" level=info msg="StartContainer for \"c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390\" returns successfully" Feb 9 18:56:54.897731 env[1558]: time="2024-02-09T18:56:54.897299256Z" level=info msg="shim disconnected" id=c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390 Feb 9 18:56:54.897731 env[1558]: time="2024-02-09T18:56:54.897729971Z" level=warning msg="cleaning up after shim disconnected" id=c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390 namespace=k8s.io Feb 9 18:56:54.898236 env[1558]: time="2024-02-09T18:56:54.897746898Z" level=info msg="cleaning up dead shim" Feb 9 18:56:54.912872 env[1558]: time="2024-02-09T18:56:54.912720090Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:56:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4969 runtime=io.containerd.runc.v2\n" Feb 9 18:56:55.756159 env[1558]: time="2024-02-09T18:56:55.756115172Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:56:55.780035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535255363.mount: Deactivated successfully. 
Feb 9 18:56:55.801659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190995564.mount: Deactivated successfully. Feb 9 18:56:55.806717 env[1558]: time="2024-02-09T18:56:55.806654568Z" level=info msg="CreateContainer within sandbox \"476ac455b5762fb4c8364dc5d94561bec79ff8b5ac27c4edb063c456ff5a7b69\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8fd90f542b0a9ca9a0eae324b75244068085aa56070bb387e499a8bbc160769e\"" Feb 9 18:56:55.807446 env[1558]: time="2024-02-09T18:56:55.807415522Z" level=info msg="StartContainer for \"8fd90f542b0a9ca9a0eae324b75244068085aa56070bb387e499a8bbc160769e\"" Feb 9 18:56:55.829755 systemd[1]: Started cri-containerd-8fd90f542b0a9ca9a0eae324b75244068085aa56070bb387e499a8bbc160769e.scope. Feb 9 18:56:55.890762 env[1558]: time="2024-02-09T18:56:55.890675958Z" level=info msg="StartContainer for \"8fd90f542b0a9ca9a0eae324b75244068085aa56070bb387e499a8bbc160769e\" returns successfully" Feb 9 18:56:56.724513 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 18:56:56.794348 kubelet[2540]: I0209 18:56:56.794310 2540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r5925" podStartSLOduration=5.794175031 podCreationTimestamp="2024-02-09 18:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:56:56.792526416 +0000 UTC m=+128.826255759" watchObservedRunningTime="2024-02-09 18:56:56.794175031 +0000 UTC m=+128.827904375" Feb 9 18:56:56.850200 kubelet[2540]: W0209 18:56:56.850156 2540 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9901d2ec_8fca_4fc6_aed1_37feb4f68d01.slice/cri-containerd-c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013.scope WatchSource:0}: task c71d158cb49945c675c9af0eade37d27510605a684f693e8228cadcf7127b013 not found: not found Feb 9 18:56:57.281771 kubelet[2540]: E0209 18:56:57.281721 2540 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5d78c9869d-bdhfb" podUID=af5236c7-8b01-45b8-a032-6623c700923e Feb 9 18:56:58.346053 systemd[1]: run-containerd-runc-k8s.io-8fd90f542b0a9ca9a0eae324b75244068085aa56070bb387e499a8bbc160769e-runc.i036qU.mount: Deactivated successfully. Feb 9 18:56:59.959647 kubelet[2540]: W0209 18:56:59.959608 2540 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9901d2ec_8fca_4fc6_aed1_37feb4f68d01.slice/cri-containerd-75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b.scope WatchSource:0}: task 75c46ff494254f48709c5713519a1faa0cbfc913e0a0360ef2fe4563f5b8922b not found: not found Feb 9 18:57:00.022076 (udev-worker)[5535]: Network interface NamePolicy= disabled on kernel command line. Feb 9 18:57:00.024150 systemd-networkd[1374]: lxc_health: Link UP Feb 9 18:57:00.033964 systemd-networkd[1374]: lxc_health: Gained carrier Feb 9 18:57:00.034501 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 18:57:00.035262 (udev-worker)[5536]: Network interface NamePolicy= disabled on kernel command line. 
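The lxc_health device that comes up here is the veth interface cilium-agent sets up for its own health checks; the ADDRCONF(NETDEV_CHANGE) line is the kernel marking the link ready, and the IPv6 link-local address lands shortly afterwards. A quick state check for that interface, as a sketch assuming the github.com/vishvananda/netlink package:

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// The interface only exists once cilium-agent has come up.
	link, err := netlink.LinkByName("lxc_health")
	if err != nil {
		log.Fatal(err)
	}
	attrs := link.Attrs()
	fmt.Printf("%s: index=%d operstate=%s flags=%s\n",
		attrs.Name, attrs.Index, attrs.OperState, attrs.Flags)
}
```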
Feb 9 18:57:00.616658 systemd[1]: run-containerd-runc-k8s.io-8fd90f542b0a9ca9a0eae324b75244068085aa56070bb387e499a8bbc160769e-runc.dxargx.mount: Deactivated successfully. Feb 9 18:57:01.336095 systemd-networkd[1374]: lxc_health: Gained IPv6LL Feb 9 18:57:03.084414 kubelet[2540]: W0209 18:57:03.084046 2540 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9901d2ec_8fca_4fc6_aed1_37feb4f68d01.slice/cri-containerd-14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210.scope WatchSource:0}: task 14ebc781b91b52fbaa68d51401980029e745ec1d4490f01553299b78cb59b210 not found: not found Feb 9 18:57:03.107558 systemd[1]: run-containerd-runc-k8s.io-8fd90f542b0a9ca9a0eae324b75244068085aa56070bb387e499a8bbc160769e-runc.OHVHmt.mount: Deactivated successfully. Feb 9 18:57:05.376779 systemd[1]: run-containerd-runc-k8s.io-8fd90f542b0a9ca9a0eae324b75244068085aa56070bb387e499a8bbc160769e-runc.W0Nyd4.mount: Deactivated successfully. Feb 9 18:57:05.535019 sshd[4564]: pam_unix(sshd:session): session closed for user core Feb 9 18:57:05.539851 systemd[1]: sshd@26-172.31.24.123:22-139.178.68.195:34354.service: Deactivated successfully. Feb 9 18:57:05.540866 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 18:57:05.542063 systemd-logind[1550]: Session 27 logged out. Waiting for processes to exit. Feb 9 18:57:05.543965 systemd-logind[1550]: Removed session 27. Feb 9 18:57:06.197836 kubelet[2540]: W0209 18:57:06.197783 2540 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9901d2ec_8fca_4fc6_aed1_37feb4f68d01.slice/cri-containerd-c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390.scope WatchSource:0}: task c7a84c6a75f8da61ad60fc065dfc882e284793e5cc38f5ca370d9ee44f0f4390 not found: not found Feb 9 18:57:20.393555 systemd[1]: cri-containerd-83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4.scope: Deactivated successfully. Feb 9 18:57:20.393877 systemd[1]: cri-containerd-83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4.scope: Consumed 2.981s CPU time. Feb 9 18:57:20.419566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4-rootfs.mount: Deactivated successfully. 
Feb 9 18:57:20.445922 env[1558]: time="2024-02-09T18:57:20.445865502Z" level=info msg="shim disconnected" id=83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4 Feb 9 18:57:20.445922 env[1558]: time="2024-02-09T18:57:20.445918843Z" level=warning msg="cleaning up after shim disconnected" id=83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4 namespace=k8s.io Feb 9 18:57:20.446691 env[1558]: time="2024-02-09T18:57:20.445932018Z" level=info msg="cleaning up dead shim" Feb 9 18:57:20.455589 env[1558]: time="2024-02-09T18:57:20.455544537Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5651 runtime=io.containerd.runc.v2\n" Feb 9 18:57:20.818393 kubelet[2540]: I0209 18:57:20.818319 2540 scope.go:115] "RemoveContainer" containerID="83fc6be04d5865d6b4b1bedd88a2f4cc897319634e62f6862a96e9f92de5ecb4" Feb 9 18:57:20.821824 env[1558]: time="2024-02-09T18:57:20.821777748Z" level=info msg="CreateContainer within sandbox \"901f3a365e01bcb527a778d40e3e24f61a1b2f6525d2dc5768c118e7a721ce3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 18:57:20.848425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946100809.mount: Deactivated successfully. Feb 9 18:57:20.859520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495063146.mount: Deactivated successfully. Feb 9 18:57:20.865649 env[1558]: time="2024-02-09T18:57:20.865603074Z" level=info msg="CreateContainer within sandbox \"901f3a365e01bcb527a778d40e3e24f61a1b2f6525d2dc5768c118e7a721ce3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"eb44bf813905a1790c832a430a1a42749d74c639d9a5eeba28cc952aea1dd625\"" Feb 9 18:57:20.866447 env[1558]: time="2024-02-09T18:57:20.866342603Z" level=info msg="StartContainer for \"eb44bf813905a1790c832a430a1a42749d74c639d9a5eeba28cc952aea1dd625\"" Feb 9 18:57:20.897784 systemd[1]: Started cri-containerd-eb44bf813905a1790c832a430a1a42749d74c639d9a5eeba28cc952aea1dd625.scope. Feb 9 18:57:20.971030 env[1558]: time="2024-02-09T18:57:20.970873188Z" level=info msg="StartContainer for \"eb44bf813905a1790c832a430a1a42749d74c639d9a5eeba28cc952aea1dd625\" returns successfully" Feb 9 18:57:20.987846 kubelet[2540]: E0209 18:57:20.987804 2540 controller.go:193] "Failed to update lease" err="Put \"https://172.31.24.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 18:57:24.532369 systemd[1]: cri-containerd-8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa.scope: Deactivated successfully. Feb 9 18:57:24.532933 systemd[1]: cri-containerd-8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa.scope: Consumed 1.539s CPU time. Feb 9 18:57:24.556107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa-rootfs.mount: Deactivated successfully. 
Feb 9 18:57:24.575537 env[1558]: time="2024-02-09T18:57:24.575453949Z" level=info msg="shim disconnected" id=8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa Feb 9 18:57:24.575537 env[1558]: time="2024-02-09T18:57:24.575532634Z" level=warning msg="cleaning up after shim disconnected" id=8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa namespace=k8s.io Feb 9 18:57:24.575537 env[1558]: time="2024-02-09T18:57:24.575545251Z" level=info msg="cleaning up dead shim" Feb 9 18:57:24.584123 env[1558]: time="2024-02-09T18:57:24.584082050Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:57:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5711 runtime=io.containerd.runc.v2\n" Feb 9 18:57:24.833120 kubelet[2540]: I0209 18:57:24.832403 2540 scope.go:115] "RemoveContainer" containerID="8a18918496a2631dbbb81ece83e659c107185a346ae9f34f0353484a388614fa" Feb 9 18:57:24.835337 env[1558]: time="2024-02-09T18:57:24.835292618Z" level=info msg="CreateContainer within sandbox \"57f34bdeb591585f82a0ce55db5988ed668a86c5add1f3eaa51d3ca50cbb44ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 18:57:24.871710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258691296.mount: Deactivated successfully. Feb 9 18:57:24.879726 env[1558]: time="2024-02-09T18:57:24.879670034Z" level=info msg="CreateContainer within sandbox \"57f34bdeb591585f82a0ce55db5988ed668a86c5add1f3eaa51d3ca50cbb44ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5da7e0f46b3bd68da6f836d15e6359c7fd4b7f18797fe6ff508dc75139ec7905\"" Feb 9 18:57:24.880571 env[1558]: time="2024-02-09T18:57:24.880537757Z" level=info msg="StartContainer for \"5da7e0f46b3bd68da6f836d15e6359c7fd4b7f18797fe6ff508dc75139ec7905\"" Feb 9 18:57:24.903007 systemd[1]: Started cri-containerd-5da7e0f46b3bd68da6f836d15e6359c7fd4b7f18797fe6ff508dc75139ec7905.scope. Feb 9 18:57:24.963234 env[1558]: time="2024-02-09T18:57:24.963145578Z" level=info msg="StartContainer for \"5da7e0f46b3bd68da6f836d15e6359c7fd4b7f18797fe6ff508dc75139ec7905\" returns successfully" Feb 9 18:57:30.989080 kubelet[2540]: E0209 18:57:30.989015 2540 controller.go:193] "Failed to update lease" err="Put \"https://172.31.24.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
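The two "Failed to update lease" timeouts bracket the kube-controller-manager and kube-scheduler restarts above: the kubelet renews its node lease by PUTting to kube-node-lease/ip-172-31-24-123 on the local apiserver, so a client timeout on that path points at the apiserver (or the node's route to it) rather than at the kubelet itself. A sketch, assuming client-go and a reachable kubeconfig, that reads the lease being renewed:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// The lease the kubelet in the log is failing to renew.
	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "ip-172-31-24-123", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	holder := ""
	if lease.Spec.HolderIdentity != nil {
		holder = *lease.Spec.HolderIdentity
	}
	// A renewTime older than the lease duration (40s by default) is one of the
	// signals the node controller uses to judge the node unhealthy.
	fmt.Printf("holder=%s renewTime=%v\n", holder, lease.Spec.RenewTime)
}
```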