Dec 13 02:19:59.183700 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:19:59.183724 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:19:59.183734 kernel: BIOS-provided physical RAM map:
Dec 13 02:19:59.183740 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:19:59.183746 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:19:59.183752 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:19:59.183762 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 02:19:59.183768 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 02:19:59.183775 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 02:19:59.183781 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:19:59.186708 kernel: NX (Execute Disable) protection: active
Dec 13 02:19:59.186716 kernel: SMBIOS 2.7 present.
Dec 13 02:19:59.186723 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 02:19:59.186730 kernel: Hypervisor detected: KVM
Dec 13 02:19:59.186743 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:19:59.186750 kernel: kvm-clock: cpu 0, msr 3419b001, primary cpu clock
Dec 13 02:19:59.186758 kernel: kvm-clock: using sched offset of 7720174511 cycles
Dec 13 02:19:59.186766 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:19:59.186773 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 02:19:59.186781 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:19:59.186801 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:19:59.186808 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 02:19:59.186816 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:19:59.186823 kernel: Using GB pages for direct mapping
Dec 13 02:19:59.186830 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:19:59.186837 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 02:19:59.186845 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 02:19:59.186852 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 02:19:59.186859 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 02:19:59.186869 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 02:19:59.186876 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:19:59.186883 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 02:19:59.186890 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 02:19:59.186898 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 02:19:59.186905 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 02:19:59.186912 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 02:19:59.186919 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:19:59.186928 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 02:19:59.186936 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 02:19:59.186943 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 02:19:59.186954 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 02:19:59.186962 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 02:19:59.186969 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 02:19:59.186977 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 02:19:59.186987 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 02:19:59.186994 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 02:19:59.187002 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 02:19:59.187010 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:19:59.187017 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:19:59.187025 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 02:19:59.187033 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 02:19:59.187040 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 02:19:59.187050 kernel: Zone ranges:
Dec 13 02:19:59.187058 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:19:59.187066 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 02:19:59.187073 kernel: Normal empty
Dec 13 02:19:59.187081 kernel: Movable zone start for each node
Dec 13 02:19:59.187089 kernel: Early memory node ranges
Dec 13 02:19:59.187097 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:19:59.187105 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 02:19:59.187112 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 02:19:59.187122 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:19:59.187130 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:19:59.187138 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 02:19:59.187145 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 02:19:59.187153 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:19:59.187161 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 02:19:59.187169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:19:59.187176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:19:59.187184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:19:59.187194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:19:59.187201 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:19:59.187209 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:19:59.187216 kernel: TSC deadline timer available
Dec 13 02:19:59.187224 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:19:59.187232 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 02:19:59.187239 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:19:59.187247 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:19:59.187255 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:19:59.187265 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:19:59.187273 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:19:59.187280 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:19:59.187288 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 02:19:59.187296 kernel: kvm-guest: PV spinlocks enabled
Dec 13 02:19:59.187303 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:19:59.187311 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 02:19:59.187319 kernel: Policy zone: DMA32
Dec 13 02:19:59.187328 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:19:59.187339 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:19:59.187346 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:19:59.187354 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:19:59.187362 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:19:59.187369 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123080K reserved, 0K cma-reserved)
Dec 13 02:19:59.187377 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:19:59.187385 kernel: Kernel/User page tables isolation: enabled
Dec 13 02:19:59.187392 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:19:59.187402 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:19:59.187410 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:19:59.187418 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:19:59.187426 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:19:59.187434 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:19:59.187442 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:19:59.187450 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:19:59.187457 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:19:59.187465 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:19:59.187474 kernel: random: crng init done
Dec 13 02:19:59.187482 kernel: Console: colour VGA+ 80x25
Dec 13 02:19:59.187490 kernel: printk: console [ttyS0] enabled
Dec 13 02:19:59.187498 kernel: ACPI: Core revision 20210730
Dec 13 02:19:59.187506 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 02:19:59.187513 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:19:59.187521 kernel: x2apic enabled
Dec 13 02:19:59.187529 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:19:59.187536 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 02:19:59.187546 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 02:19:59.187554 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:19:59.187562 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:19:59.187569 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:19:59.187584 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:19:59.187594 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:19:59.187602 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:19:59.187610 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 02:19:59.187618 kernel: RETBleed: Vulnerable
Dec 13 02:19:59.187626 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:19:59.187634 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:19:59.187642 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:19:59.187650 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:19:59.187658 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:19:59.187668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:19:59.187676 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:19:59.187684 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 02:19:59.187692 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 02:19:59.187700 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 02:19:59.187710 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 02:19:59.187718 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 02:19:59.187726 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 02:19:59.187734 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:19:59.187742 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 02:19:59.187750 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 02:19:59.187758 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 02:19:59.187766 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 02:19:59.187774 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 02:19:59.191807 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 02:19:59.191835 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 02:19:59.191845 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:19:59.191858 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:19:59.191866 kernel: LSM: Security Framework initializing
Dec 13 02:19:59.191874 kernel: SELinux: Initializing.
Dec 13 02:19:59.191882 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:19:59.191891 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:19:59.191899 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 02:19:59.191907 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 02:19:59.191916 kernel: signal: max sigframe size: 3632
Dec 13 02:19:59.191925 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:19:59.191933 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:19:59.191944 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:19:59.191952 kernel: x86: Booting SMP configuration:
Dec 13 02:19:59.191960 kernel: .... node #0, CPUs: #1
Dec 13 02:19:59.191969 kernel: kvm-clock: cpu 1, msr 3419b041, secondary cpu clock
Dec 13 02:19:59.191977 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 02:19:59.191987 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 02:19:59.191996 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:19:59.192004 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:19:59.192012 kernel: smpboot: Max logical packages: 1
Dec 13 02:19:59.192032 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 02:19:59.192041 kernel: devtmpfs: initialized
Dec 13 02:19:59.192049 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:19:59.192057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:19:59.192066 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:19:59.192074 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:19:59.192082 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:19:59.192091 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:19:59.192099 kernel: audit: type=2000 audit(1734056397.814:1): state=initialized audit_enabled=0 res=1
Dec 13 02:19:59.192110 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:19:59.192118 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:19:59.192126 kernel: cpuidle: using governor menu
Dec 13 02:19:59.192135 kernel: ACPI: bus type PCI registered
Dec 13 02:19:59.192143 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:19:59.192151 kernel: dca service started, version 1.12.1
Dec 13 02:19:59.192159 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:19:59.192167 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:19:59.192176 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:19:59.192186 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:19:59.192194 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:19:59.192203 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:19:59.192211 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:19:59.192219 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:19:59.192227 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:19:59.192236 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:19:59.192244 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:19:59.192252 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 02:19:59.192262 kernel: ACPI: Interpreter enabled
Dec 13 02:19:59.192270 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:19:59.192278 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:19:59.192287 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:19:59.192295 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 02:19:59.192303 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:19:59.192468 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:19:59.192555 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 02:19:59.192569 kernel: acpiphp: Slot [3] registered
Dec 13 02:19:59.192577 kernel: acpiphp: Slot [4] registered
Dec 13 02:19:59.192586 kernel: acpiphp: Slot [5] registered
Dec 13 02:19:59.192594 kernel: acpiphp: Slot [6] registered
Dec 13 02:19:59.192603 kernel: acpiphp: Slot [7] registered
Dec 13 02:19:59.192611 kernel: acpiphp: Slot [8] registered
Dec 13 02:19:59.192619 kernel: acpiphp: Slot [9] registered
Dec 13 02:19:59.192627 kernel: acpiphp: Slot [10] registered
Dec 13 02:19:59.192636 kernel: acpiphp: Slot [11] registered
Dec 13 02:19:59.192647 kernel: acpiphp: Slot [12] registered
Dec 13 02:19:59.192655 kernel: acpiphp: Slot [13] registered
Dec 13 02:19:59.192663 kernel: acpiphp: Slot [14] registered
Dec 13 02:19:59.192671 kernel: acpiphp: Slot [15] registered
Dec 13 02:19:59.192679 kernel: acpiphp: Slot [16] registered
Dec 13 02:19:59.192688 kernel: acpiphp: Slot [17] registered
Dec 13 02:19:59.192696 kernel: acpiphp: Slot [18] registered
Dec 13 02:19:59.192704 kernel: acpiphp: Slot [19] registered
Dec 13 02:19:59.192712 kernel: acpiphp: Slot [20] registered
Dec 13 02:19:59.192722 kernel: acpiphp: Slot [21] registered
Dec 13 02:19:59.192730 kernel: acpiphp: Slot [22] registered
Dec 13 02:19:59.192738 kernel: acpiphp: Slot [23] registered
Dec 13 02:19:59.192746 kernel: acpiphp: Slot [24] registered
Dec 13 02:19:59.192754 kernel: acpiphp: Slot [25] registered
Dec 13 02:19:59.192762 kernel: acpiphp: Slot [26] registered
Dec 13 02:19:59.192770 kernel: acpiphp: Slot [27] registered
Dec 13 02:19:59.192778 kernel: acpiphp: Slot [28] registered
Dec 13 02:19:59.192806 kernel: acpiphp: Slot [29] registered
Dec 13 02:19:59.192814 kernel: acpiphp: Slot [30] registered
Dec 13 02:19:59.192824 kernel: acpiphp: Slot [31] registered
Dec 13 02:19:59.192832 kernel: PCI host bridge to bus 0000:00
Dec 13 02:19:59.192926 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:19:59.192999 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:19:59.193072 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:19:59.193143 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:19:59.193265 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:19:59.193365 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:19:59.193456 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:19:59.193543 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 02:19:59.193624 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 02:19:59.193705 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 02:19:59.193861 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 02:19:59.193955 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 02:19:59.194039 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 02:19:59.194119 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 02:19:59.194199 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 02:19:59.194278 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 02:19:59.194364 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 02:19:59.194444 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 02:19:59.194523 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 02:19:59.194605 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:19:59.194689 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 02:19:59.194768 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 02:19:59.194864 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 02:19:59.194944 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 02:19:59.194955 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:19:59.194966 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:19:59.194975 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:19:59.194983 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:19:59.194991 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:19:59.194999 kernel: iommu: Default domain type: Translated
Dec 13 02:19:59.195008 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:19:59.195086 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 02:19:59.195166 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:19:59.195244 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 02:19:59.195257 kernel: vgaarb: loaded
Dec 13 02:19:59.195266 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:19:59.195274 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 02:19:59.195283 kernel: PTP clock support registered
Dec 13 02:19:59.195291 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:19:59.195299 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:19:59.195308 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:19:59.195317 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 02:19:59.195327 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 02:19:59.195335 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 02:19:59.195343 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:19:59.195351 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:19:59.195360 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:19:59.195368 kernel: pnp: PnP ACPI init
Dec 13 02:19:59.195376 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:19:59.195385 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:19:59.195393 kernel: NET: Registered PF_INET protocol family
Dec 13 02:19:59.195403 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:19:59.195412 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:19:59.195420 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:19:59.195428 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:19:59.195437 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 02:19:59.195445 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:19:59.195453 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:19:59.195461 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:19:59.195470 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:19:59.195480 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:19:59.195554 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:19:59.195630 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:19:59.195701 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:19:59.195772 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:19:59.195863 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:19:59.195944 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 02:19:59.195958 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:19:59.195966 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:19:59.195975 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 02:19:59.195983 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:19:59.195991 kernel: Initialise system trusted keyrings
Dec 13 02:19:59.196000 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:19:59.196008 kernel: Key type asymmetric registered
Dec 13 02:19:59.196016 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:19:59.196033 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:19:59.196044 kernel: io scheduler mq-deadline registered
Dec 13 02:19:59.196052 kernel: io scheduler kyber registered
Dec 13 02:19:59.196060 kernel: io scheduler bfq registered
Dec 13 02:19:59.196068 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:19:59.196077 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:19:59.196085 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:19:59.196093 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:19:59.196101 kernel: i8042: Warning: Keylock active
Dec 13 02:19:59.196109 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:19:59.196120 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:19:59.196209 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 02:19:59.196286 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 02:19:59.196360 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:19:58 UTC (1734056398)
Dec 13 02:19:59.196434 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 02:19:59.196444 kernel: intel_pstate: CPU model not supported
Dec 13 02:19:59.196453 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:19:59.196461 kernel: Segment Routing with IPv6
Dec 13 02:19:59.196471 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:19:59.196480 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:19:59.196488 kernel: Key type dns_resolver registered
Dec 13 02:19:59.196496 kernel: IPI shorthand broadcast: enabled
Dec 13 02:19:59.196504 kernel: sched_clock: Marking stable (417131013, 276595965)->(850078220, -156351242)
Dec 13 02:19:59.196512 kernel: registered taskstats version 1
Dec 13 02:19:59.196521 kernel: Loading compiled-in X.509 certificates
Dec 13 02:19:59.196529 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:19:59.196537 kernel: Key type .fscrypt registered
Dec 13 02:19:59.196547 kernel: Key type fscrypt-provisioning registered
Dec 13 02:19:59.196556 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:19:59.196564 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:19:59.196572 kernel: ima: No architecture policies found
Dec 13 02:19:59.196580 kernel: clk: Disabling unused clocks
Dec 13 02:19:59.196588 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:19:59.196596 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:19:59.196605 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:19:59.196613 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:19:59.196624 kernel: Run /init as init process
Dec 13 02:19:59.196632 kernel: with arguments:
Dec 13 02:19:59.196640 kernel: /init
Dec 13 02:19:59.196648 kernel: with environment:
Dec 13 02:19:59.196656 kernel: HOME=/
Dec 13 02:19:59.196663 kernel: TERM=linux
Dec 13 02:19:59.196671 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:19:59.196682 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:19:59.196695 systemd[1]: Detected virtualization amazon.
Dec 13 02:19:59.196704 systemd[1]: Detected architecture x86-64.
Dec 13 02:19:59.196712 systemd[1]: Running in initrd.
Dec 13 02:19:59.196721 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:19:59.196741 systemd[1]: Hostname set to .
Dec 13 02:19:59.196752 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:19:59.196763 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:19:59.196772 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:19:59.196781 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:19:59.206858 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:19:59.206871 systemd[1]: Reached target paths.target.
Dec 13 02:19:59.206880 systemd[1]: Reached target slices.target.
Dec 13 02:19:59.206889 systemd[1]: Reached target swap.target.
Dec 13 02:19:59.206898 systemd[1]: Reached target timers.target.
Dec 13 02:19:59.206913 systemd[1]: Listening on iscsid.socket.
Dec 13 02:19:59.206922 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:19:59.206931 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:19:59.206940 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:19:59.206949 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:19:59.206959 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:19:59.206968 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:19:59.206978 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:19:59.206989 systemd[1]: Reached target sockets.target.
Dec 13 02:19:59.206998 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:19:59.207007 systemd[1]: Finished network-cleanup.service.
Dec 13 02:19:59.207018 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:19:59.207027 systemd[1]: Starting systemd-journald.service...
Dec 13 02:19:59.207036 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:19:59.207046 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:19:59.207055 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:19:59.207064 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:19:59.207075 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:19:59.207084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:19:59.207093 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:19:59.207111 systemd-journald[185]: Journal started
Dec 13 02:19:59.207176 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2df78ff6224b218282a46346d8b4b8) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:19:59.183864 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:19:59.308284 systemd[1]: Started systemd-journald.service.
Dec 13 02:19:59.308323 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:19:59.308344 kernel: Bridge firewalling registered
Dec 13 02:19:59.308404 kernel: SCSI subsystem initialized
Dec 13 02:19:59.308419 kernel: audit: type=1130 audit(1734056399.287:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.308441 kernel: audit: type=1130 audit(1734056399.287:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.308461 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:19:59.308480 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:19:59.308498 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:19:59.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.215610 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 02:19:59.215625 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:19:59.215660 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:19:59.225514 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 02:19:59.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.323806 kernel: audit: type=1130 audit(1734056399.307:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.246148 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:19:59.288729 systemd[1]: Started systemd-resolved.service.
Dec 13 02:19:59.289701 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:19:59.295316 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:19:59.319093 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:19:59.329158 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 02:19:59.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.331470 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:19:59.342620 kernel: audit: type=1130 audit(1734056399.331:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.336149 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:19:59.358857 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:19:59.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.365815 kernel: audit: type=1130 audit(1734056399.357:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.370359 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:19:59.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.376804 kernel: audit: type=1130 audit(1734056399.370:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:19:59.373056 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:19:59.385973 dracut-cmdline[206]: dracut-dracut-053 Dec 13 02:19:59.388706 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:19:59.461815 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:19:59.480814 kernel: iscsi: registered transport (tcp) Dec 13 02:19:59.517831 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:19:59.517904 kernel: QLogic iSCSI HBA Driver Dec 13 02:19:59.565030 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:19:59.566371 systemd[1]: Starting dracut-pre-udev.service... Dec 13 02:19:59.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:19:59.573467 kernel: audit: type=1130 audit(1734056399.563:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:19:59.622828 kernel: raid6: avx512x4 gen() 15748 MB/s Dec 13 02:19:59.639824 kernel: raid6: avx512x4 xor() 7609 MB/s Dec 13 02:19:59.656828 kernel: raid6: avx512x2 gen() 16583 MB/s Dec 13 02:19:59.674835 kernel: raid6: avx512x2 xor() 22701 MB/s Dec 13 02:19:59.692863 kernel: raid6: avx512x1 gen() 16973 MB/s Dec 13 02:19:59.709823 kernel: raid6: avx512x1 xor() 19754 MB/s Dec 13 02:19:59.726835 kernel: raid6: avx2x4 gen() 14538 MB/s Dec 13 02:19:59.743835 kernel: raid6: avx2x4 xor() 6025 MB/s Dec 13 02:19:59.760841 kernel: raid6: avx2x2 gen() 11375 MB/s Dec 13 02:19:59.777839 kernel: raid6: avx2x2 xor() 15173 MB/s Dec 13 02:19:59.794841 kernel: raid6: avx2x1 gen() 11128 MB/s Dec 13 02:19:59.812863 kernel: raid6: avx2x1 xor() 11633 MB/s Dec 13 02:19:59.829862 kernel: raid6: sse2x4 gen() 8176 MB/s Dec 13 02:19:59.846822 kernel: raid6: sse2x4 xor() 5485 MB/s Dec 13 02:19:59.863836 kernel: raid6: sse2x2 gen() 8493 MB/s Dec 13 02:19:59.880839 kernel: raid6: sse2x2 xor() 5520 MB/s Dec 13 02:19:59.897833 kernel: raid6: sse2x1 gen() 6722 MB/s Dec 13 02:19:59.916631 kernel: raid6: sse2x1 xor() 3895 MB/s Dec 13 02:19:59.916721 kernel: raid6: using algorithm avx512x1 gen() 16973 MB/s Dec 13 02:19:59.916739 kernel: raid6: .... xor() 19754 MB/s, rmw enabled Dec 13 02:19:59.917516 kernel: raid6: using avx512x2 recovery algorithm Dec 13 02:19:59.961756 kernel: xor: automatically using best checksumming function avx Dec 13 02:20:00.140823 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:20:00.166551 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:20:00.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:00.171470 systemd[1]: Starting systemd-udevd.service... 
Dec 13 02:20:00.178431 kernel: audit: type=1130 audit(1734056400.166:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:00.178514 kernel: audit: type=1334 audit(1734056400.168:10): prog-id=7 op=LOAD Dec 13 02:20:00.168000 audit: BPF prog-id=7 op=LOAD Dec 13 02:20:00.168000 audit: BPF prog-id=8 op=LOAD Dec 13 02:20:00.204290 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 02:20:00.217186 systemd[1]: Started systemd-udevd.service. Dec 13 02:20:00.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:00.221276 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:20:00.250201 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation Dec 13 02:20:00.318772 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:20:00.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:00.323935 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:20:00.411815 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:20:00.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:00.538810 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:20:00.555203 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 02:20:00.555264 kernel: AES CTR mode by8 optimization enabled Dec 13 02:20:00.574807 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 02:20:00.587119 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 02:20:00.587277 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 02:20:00.587444 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 02:20:00.587462 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 13 02:20:00.587586 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 02:20:00.587705 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:a5:98:ac:1b:3f Dec 13 02:20:00.591811 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:20:00.591874 kernel: GPT:9289727 != 16777215 Dec 13 02:20:00.591892 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:20:00.591909 kernel: GPT:9289727 != 16777215 Dec 13 02:20:00.591930 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:20:00.591946 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:20:00.596153 (udev-worker)[431]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:20:00.724645 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (439) Dec 13 02:20:00.759943 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:20:00.796187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:20:00.810500 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:20:00.815448 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:20:00.818717 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:20:00.822051 systemd[1]: Starting disk-uuid.service... 
Dec 13 02:20:00.834821 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:20:00.835115 disk-uuid[593]: Primary Header is updated. Dec 13 02:20:00.835115 disk-uuid[593]: Secondary Entries is updated. Dec 13 02:20:00.835115 disk-uuid[593]: Secondary Header is updated. Dec 13 02:20:01.887824 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:20:01.888252 disk-uuid[594]: The operation has completed successfully. Dec 13 02:20:02.228144 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:20:02.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.228260 systemd[1]: Finished disk-uuid.service. Dec 13 02:20:02.244989 systemd[1]: Starting verity-setup.service... Dec 13 02:20:02.282825 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:20:02.444867 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:20:02.454827 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:20:02.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.464500 systemd[1]: Finished verity-setup.service. Dec 13 02:20:02.614460 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:20:02.615170 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:20:02.617186 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:20:02.624270 systemd[1]: Starting ignition-setup.service... Dec 13 02:20:02.631478 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 02:20:02.682480 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:20:02.682563 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:20:02.682584 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:20:02.701811 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:20:02.727711 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:20:02.742902 systemd[1]: Finished ignition-setup.service. Dec 13 02:20:02.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.744836 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:20:02.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.804586 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:20:02.806000 audit: BPF prog-id=9 op=LOAD Dec 13 02:20:02.807658 systemd[1]: Starting systemd-networkd.service... Dec 13 02:20:02.837396 systemd-networkd[939]: lo: Link UP Dec 13 02:20:02.837408 systemd-networkd[939]: lo: Gained carrier Dec 13 02:20:02.839307 systemd-networkd[939]: Enumeration completed Dec 13 02:20:02.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.839568 systemd[1]: Started systemd-networkd.service. Dec 13 02:20:02.840691 systemd[1]: Reached target network.target. Dec 13 02:20:02.842204 systemd-networkd[939]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:20:02.842851 systemd[1]: Starting iscsiuio.service... 
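[Editor's note] The "Configuring with /usr/lib/systemd/network/zz-default.network" entry above refers to Flatcar's catch-all fallback network unit. As an illustrative sketch only (the exact file shipped in a given Flatcar release may differ), it is roughly:

```
# /usr/lib/systemd/network/zz-default.network (illustrative sketch, not copied from this host)
[Match]
Name=*

[Network]
DHCP=yes
```

A catch-all [Match] with DHCP enabled is why eth0 acquires 172.31.22.41/20 via DHCPv4 a few entries later without any host-specific network configuration.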
Dec 13 02:20:02.869462 systemd[1]: Started iscsiuio.service. Dec 13 02:20:02.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.876959 systemd-networkd[939]: eth0: Link UP Dec 13 02:20:02.876966 systemd-networkd[939]: eth0: Gained carrier Dec 13 02:20:02.877565 systemd[1]: Starting iscsid.service... Dec 13 02:20:02.887709 iscsid[944]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:20:02.887709 iscsid[944]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 02:20:02.887709 iscsid[944]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:20:02.887709 iscsid[944]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:20:02.887709 iscsid[944]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:20:02.899623 iscsid[944]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:20:02.902872 systemd[1]: Started iscsid.service. Dec 13 02:20:02.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.905874 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:20:02.912090 systemd-networkd[939]: eth0: DHCPv4 address 172.31.22.41/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:20:02.925950 systemd[1]: Finished dracut-initqueue.service. 
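[Editor's note] The iscsid warnings above are benign when no software-iSCSI targets are in use, and can be silenced by creating the file the message names. Using the example IQN embedded in the log message itself (not a value specific to this host), a minimal /etc/iscsi/initiatorname.iscsi is a single line:

```
# /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2001-04.com.redhat:fc6
```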
Dec 13 02:20:02.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:02.936840 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:20:02.936950 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:20:02.937116 systemd[1]: Reached target remote-fs.target. Dec 13 02:20:02.938415 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:20:02.966634 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:20:02.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.324064 ignition[897]: Ignition 2.14.0 Dec 13 02:20:03.324081 ignition[897]: Stage: fetch-offline Dec 13 02:20:03.324233 ignition[897]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:20:03.324275 ignition[897]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:20:03.344408 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:20:03.346356 ignition[897]: Ignition finished successfully Dec 13 02:20:03.348714 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:20:03.361636 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 13 02:20:03.361686 kernel: audit: type=1130 audit(1734056403.348:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:20:03.351187 systemd[1]: Starting ignition-fetch.service... Dec 13 02:20:03.377545 ignition[963]: Ignition 2.14.0 Dec 13 02:20:03.377560 ignition[963]: Stage: fetch Dec 13 02:20:03.377774 ignition[963]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:20:03.377834 ignition[963]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:20:03.388081 ignition[963]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:20:03.389936 ignition[963]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:20:03.400243 ignition[963]: INFO : PUT result: OK Dec 13 02:20:03.402820 ignition[963]: DEBUG : parsed url from cmdline: "" Dec 13 02:20:03.402820 ignition[963]: INFO : no config URL provided Dec 13 02:20:03.402820 ignition[963]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:20:03.406844 ignition[963]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 02:20:03.406844 ignition[963]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:20:03.406844 ignition[963]: INFO : PUT result: OK Dec 13 02:20:03.406844 ignition[963]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 02:20:03.406844 ignition[963]: INFO : GET result: OK Dec 13 02:20:03.413593 ignition[963]: DEBUG : parsing config with SHA512: 8755f2f660faa539ecb691c52b7a5b816e703b2913149dfdd9e92def9d361508dd799d1ca8f7af83ca4d38844e71f67135d32184c5d40ccee66a1668f7fc9359 Dec 13 02:20:03.418516 unknown[963]: fetched base config from "system" Dec 13 02:20:03.418592 unknown[963]: fetched base config from "system" Dec 13 02:20:03.418602 unknown[963]: fetched user config from "aws" Dec 13 02:20:03.422103 ignition[963]: fetch: fetch complete Dec 13 02:20:03.422115 ignition[963]: fetch: fetch passed Dec 13 02:20:03.422178 ignition[963]: Ignition finished successfully Dec 
13 02:20:03.426945 systemd[1]: Finished ignition-fetch.service. Dec 13 02:20:03.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.428879 systemd[1]: Starting ignition-kargs.service... Dec 13 02:20:03.434700 kernel: audit: type=1130 audit(1734056403.425:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.447923 ignition[969]: Ignition 2.14.0 Dec 13 02:20:03.447937 ignition[969]: Stage: kargs Dec 13 02:20:03.448306 ignition[969]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:20:03.448345 ignition[969]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:20:03.476915 ignition[969]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:20:03.478836 ignition[969]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:20:03.481508 ignition[969]: INFO : PUT result: OK Dec 13 02:20:03.487570 ignition[969]: kargs: kargs passed Dec 13 02:20:03.487652 ignition[969]: Ignition finished successfully Dec 13 02:20:03.496532 systemd[1]: Finished ignition-kargs.service. Dec 13 02:20:03.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.506863 kernel: audit: type=1130 audit(1734056403.499:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.501631 systemd[1]: Starting ignition-disks.service... 
Dec 13 02:20:03.517140 ignition[975]: Ignition 2.14.0 Dec 13 02:20:03.517153 ignition[975]: Stage: disks Dec 13 02:20:03.517689 ignition[975]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:20:03.517732 ignition[975]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:20:03.527470 ignition[975]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:20:03.528920 ignition[975]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:20:03.530829 ignition[975]: INFO : PUT result: OK Dec 13 02:20:03.534321 ignition[975]: disks: disks passed Dec 13 02:20:03.534384 ignition[975]: Ignition finished successfully Dec 13 02:20:03.536833 systemd[1]: Finished ignition-disks.service. Dec 13 02:20:03.537069 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:20:03.544734 kernel: audit: type=1130 audit(1734056403.535:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.539209 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:20:03.544741 systemd[1]: Reached target local-fs.target. Dec 13 02:20:03.545573 systemd[1]: Reached target sysinit.target. Dec 13 02:20:03.545635 systemd[1]: Reached target basic.target. Dec 13 02:20:03.550043 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:20:03.581481 systemd-fsck[983]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:20:03.585900 systemd[1]: Finished systemd-fsck-root.service. 
Dec 13 02:20:03.595605 kernel: audit: type=1130 audit(1734056403.585:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.588184 systemd[1]: Mounting sysroot.mount... Dec 13 02:20:03.613808 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:20:03.615561 systemd[1]: Mounted sysroot.mount. Dec 13 02:20:03.617490 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:20:03.628486 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:20:03.632219 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:20:03.633882 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:20:03.637665 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:20:03.640934 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:20:03.660121 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:20:03.666919 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 02:20:03.682683 initrd-setup-root[1005]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:20:03.703265 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1000) Dec 13 02:20:03.703431 initrd-setup-root[1013]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:20:03.708798 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:20:03.708904 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:20:03.708922 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:20:03.712835 initrd-setup-root[1037]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:20:03.719504 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:20:03.719574 initrd-setup-root[1047]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:20:03.727620 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:20:03.908545 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:20:03.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.915827 kernel: audit: type=1130 audit(1734056403.908:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.910871 systemd[1]: Starting ignition-mount.service... Dec 13 02:20:03.919194 systemd[1]: Starting sysroot-boot.service... Dec 13 02:20:03.925212 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:20:03.925350 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 02:20:03.947650 ignition[1065]: INFO : Ignition 2.14.0 Dec 13 02:20:03.952629 ignition[1065]: INFO : Stage: mount Dec 13 02:20:03.954739 ignition[1065]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:20:03.956671 ignition[1065]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:20:03.972361 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:20:03.974712 ignition[1065]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:20:03.977077 ignition[1065]: INFO : PUT result: OK Dec 13 02:20:03.981553 systemd[1]: Finished sysroot-boot.service. Dec 13 02:20:03.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.987850 kernel: audit: type=1130 audit(1734056403.982:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:03.990356 ignition[1065]: INFO : mount: mount passed Dec 13 02:20:03.991409 ignition[1065]: INFO : Ignition finished successfully Dec 13 02:20:03.994233 systemd[1]: Finished ignition-mount.service. Dec 13 02:20:03.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:04.012603 kernel: audit: type=1130 audit(1734056403.998:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:04.001294 systemd[1]: Starting ignition-files.service... 
Dec 13 02:20:04.027947 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:20:04.048828 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1075) Dec 13 02:20:04.048888 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:20:04.051654 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:20:04.051722 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:20:04.058808 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:20:04.063374 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:20:04.076825 ignition[1094]: INFO : Ignition 2.14.0 Dec 13 02:20:04.076825 ignition[1094]: INFO : Stage: files Dec 13 02:20:04.079537 ignition[1094]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:20:04.079537 ignition[1094]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:20:04.104472 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:20:04.105890 ignition[1094]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:20:04.107342 ignition[1094]: INFO : PUT result: OK Dec 13 02:20:04.112909 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:20:04.134666 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:20:04.146038 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:20:04.159809 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:20:04.163135 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:20:04.169464 unknown[1094]: wrote ssh authorized keys file for user: core Dec 13 
02:20:04.172146 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:20:04.179988 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:20:04.181839 ignition[1094]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:20:04.192819 ignition[1094]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2322283413" Dec 13 02:20:04.197059 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1097) Dec 13 02:20:04.197087 ignition[1094]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2322283413": device or resource busy Dec 13 02:20:04.197087 ignition[1094]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2322283413", trying btrfs: device or resource busy Dec 13 02:20:04.197087 ignition[1094]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2322283413" Dec 13 02:20:04.197087 ignition[1094]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2322283413" Dec 13 02:20:04.205104 ignition[1094]: INFO : op(3): [started] unmounting "/mnt/oem2322283413" Dec 13 02:20:04.207411 ignition[1094]: INFO : op(3): [finished] unmounting "/mnt/oem2322283413" Dec 13 02:20:04.207411 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:20:04.207411 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:20:04.207411 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:20:04.207411 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 
02:20:04.207411 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:20:04.207411 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:20:04.207411 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:20:04.207411 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:20:04.207411 ignition[1094]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:20:04.248654 ignition[1094]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4059114811" Dec 13 02:20:04.248654 ignition[1094]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4059114811": device or resource busy Dec 13 02:20:04.248654 ignition[1094]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4059114811", trying btrfs: device or resource busy Dec 13 02:20:04.248654 ignition[1094]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4059114811" Dec 13 02:20:04.257995 ignition[1094]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4059114811" Dec 13 02:20:04.257995 ignition[1094]: INFO : op(6): [started] unmounting "/mnt/oem4059114811" Dec 13 02:20:04.257995 ignition[1094]: INFO : op(6): [finished] unmounting "/mnt/oem4059114811" Dec 13 02:20:04.257995 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:20:04.257995 ignition[1094]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:20:04.257995 ignition[1094]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:20:04.289561 ignition[1094]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3179637187" Dec 13 02:20:04.289561 ignition[1094]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3179637187": device or resource busy Dec 13 02:20:04.296130 ignition[1094]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3179637187", trying btrfs: device or resource busy Dec 13 02:20:04.296130 ignition[1094]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3179637187" Dec 13 02:20:04.296130 ignition[1094]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3179637187" Dec 13 02:20:04.296130 ignition[1094]: INFO : op(9): [started] unmounting "/mnt/oem3179637187" Dec 13 02:20:04.296130 ignition[1094]: INFO : op(9): [finished] unmounting "/mnt/oem3179637187" Dec 13 02:20:04.296130 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:20:04.308419 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:20:04.308419 ignition[1094]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:20:04.342986 ignition[1094]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1913857629" Dec 13 02:20:04.342986 ignition[1094]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1913857629": device or resource busy Dec 13 02:20:04.342986 ignition[1094]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1913857629", trying btrfs: device or resource busy Dec 13 02:20:04.342986 ignition[1094]: INFO : op(b): [started] 
mounting "/dev/disk/by-label/OEM" at "/mnt/oem1913857629" Dec 13 02:20:04.342986 ignition[1094]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1913857629" Dec 13 02:20:04.342986 ignition[1094]: INFO : op(c): [started] unmounting "/mnt/oem1913857629" Dec 13 02:20:04.342986 ignition[1094]: INFO : op(c): [finished] unmounting "/mnt/oem1913857629" Dec 13 02:20:04.342986 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:20:04.342986 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:20:04.342986 ignition[1094]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:20:04.699963 systemd-networkd[939]: eth0: Gained IPv6LL Dec 13 02:20:04.999742 ignition[1094]: INFO : GET result: OK Dec 13 02:20:05.429058 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:20:05.429058 ignition[1094]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:20:05.429058 ignition[1094]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:20:05.429058 ignition[1094]: INFO : files: op(c): [started] processing unit "amazon-ssm-agent.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(c): op(d): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(c): [finished] processing unit 
"amazon-ssm-agent.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(e): [started] processing unit "nvidia.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(e): [finished] processing unit "nvidia.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(10): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(10): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(11): [started] setting preset to enabled for "nvidia.service" Dec 13 02:20:05.442367 ignition[1094]: INFO : files: op(11): [finished] setting preset to enabled for "nvidia.service" Dec 13 02:20:05.491450 ignition[1094]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:20:05.491450 ignition[1094]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:20:05.491450 ignition[1094]: INFO : files: files passed Dec 13 02:20:05.491450 ignition[1094]: INFO : Ignition finished successfully Dec 13 02:20:05.511389 kernel: audit: type=1130 audit(1734056405.501:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.485365 systemd[1]: Finished ignition-files.service. 
Dec 13 02:20:05.516084 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:20:05.517411 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:20:05.518891 systemd[1]: Starting ignition-quench.service... Dec 13 02:20:05.524671 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:20:05.524814 systemd[1]: Finished ignition-quench.service. Dec 13 02:20:05.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.535876 kernel: audit: type=1130 audit(1734056405.528:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.540449 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:20:05.543555 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:20:05.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.546275 systemd[1]: Reached target ignition-complete.target. Dec 13 02:20:05.548669 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:20:05.573598 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:20:05.576756 systemd[1]: Finished initrd-parse-etc.service. 
Dec 13 02:20:05.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.579014 systemd[1]: Reached target initrd-fs.target. Dec 13 02:20:05.580815 systemd[1]: Reached target initrd.target. Dec 13 02:20:05.582420 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:20:05.583654 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:20:05.608519 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:20:05.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.610751 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:20:05.644501 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:20:05.651105 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:20:05.656262 systemd[1]: Stopped target timers.target. Dec 13 02:20:05.658675 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:20:05.662442 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:20:05.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.670607 systemd[1]: Stopped target initrd.target. Dec 13 02:20:05.673614 systemd[1]: Stopped target basic.target. Dec 13 02:20:05.674477 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:20:05.676810 systemd[1]: Stopped target ignition-diskful.target. 
Dec 13 02:20:05.679230 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:20:05.682543 systemd[1]: Stopped target remote-fs.target. Dec 13 02:20:05.684489 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:20:05.684664 systemd[1]: Stopped target sysinit.target. Dec 13 02:20:05.688888 systemd[1]: Stopped target local-fs.target. Dec 13 02:20:05.691883 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:20:05.693522 systemd[1]: Stopped target swap.target. Dec 13 02:20:05.695853 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:20:05.697055 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:20:05.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.712167 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:20:05.714490 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:20:05.714616 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:20:05.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.717453 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:20:05.719774 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:20:05.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.724138 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:20:05.726142 systemd[1]: Stopped ignition-files.service. 
Dec 13 02:20:05.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.730452 systemd[1]: Stopping ignition-mount.service... Dec 13 02:20:05.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.751027 iscsid[944]: iscsid shutting down. Dec 13 02:20:05.737213 systemd[1]: Stopping iscsid.service... Dec 13 02:20:05.739509 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:20:05.739770 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:20:05.755064 ignition[1132]: INFO : Ignition 2.14.0 Dec 13 02:20:05.755064 ignition[1132]: INFO : Stage: umount Dec 13 02:20:05.744538 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:20:05.757805 ignition[1132]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:20:05.757805 ignition[1132]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:20:05.747672 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:20:05.747924 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:20:05.749285 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:20:05.750432 systemd[1]: Stopped dracut-pre-trigger.service. 
Dec 13 02:20:05.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.775252 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:20:05.775391 systemd[1]: Stopped iscsid.service. Dec 13 02:20:05.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.780300 ignition[1132]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:20:05.781623 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:20:05.784608 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:20:05.788050 ignition[1132]: INFO : PUT result: OK Dec 13 02:20:05.788906 systemd[1]: Stopping iscsiuio.service... Dec 13 02:20:05.792385 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:20:05.793700 ignition[1132]: INFO : umount: umount passed Dec 13 02:20:05.793700 ignition[1132]: INFO : Ignition finished successfully Dec 13 02:20:05.796080 systemd[1]: Stopped iscsiuio.service. Dec 13 02:20:05.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.797800 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:20:05.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:20:05.797879 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:20:05.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.799416 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:20:05.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.799504 systemd[1]: Stopped ignition-mount.service. Dec 13 02:20:05.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.801281 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:20:05.801452 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:20:05.803592 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:20:05.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.803638 systemd[1]: Stopped ignition-disks.service. Dec 13 02:20:05.806744 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:20:05.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.806842 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:20:05.813590 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:20:05.814496 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:20:05.818608 systemd[1]: Stopped target network.target. 
Dec 13 02:20:05.822556 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:20:05.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.822652 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:20:05.826468 systemd[1]: Stopped target paths.target. Dec 13 02:20:05.828467 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:20:05.830876 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:20:05.833697 systemd[1]: Stopped target slices.target. Dec 13 02:20:05.835237 systemd[1]: Stopped target sockets.target. Dec 13 02:20:05.836816 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:20:05.836910 systemd[1]: Closed iscsid.socket. Dec 13 02:20:05.838446 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:20:05.839109 systemd[1]: Closed iscsiuio.socket. Dec 13 02:20:05.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.839850 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:20:05.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.839897 systemd[1]: Stopped ignition-setup.service. Dec 13 02:20:05.841954 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:20:05.842639 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:20:05.844600 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:20:05.848169 systemd[1]: Stopping systemd-resolved.service... 
Dec 13 02:20:05.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.851351 systemd-networkd[939]: eth0: DHCPv6 lease lost Dec 13 02:20:05.853852 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:20:05.853945 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:20:05.858508 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:20:05.859747 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:20:05.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.860000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:20:05.862508 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:20:05.862564 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:20:05.863000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:20:05.866467 systemd[1]: Stopping network-cleanup.service... Dec 13 02:20:05.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.867653 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:20:05.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.867730 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:20:05.869567 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 13 02:20:05.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.869628 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:20:05.873456 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:20:05.873525 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:20:05.876877 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:20:05.886496 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:20:05.912090 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:20:05.913335 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:20:05.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.916399 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:20:05.916528 systemd[1]: Stopped network-cleanup.service. Dec 13 02:20:05.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.918676 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:20:05.918730 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:20:05.920579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:20:05.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.920861 systemd[1]: Closed systemd-udevd-kernel.socket. 
Dec 13 02:20:05.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.923429 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:20:05.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.923502 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:20:05.926968 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:20:05.927030 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:20:05.929535 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:20:05.929583 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:20:05.933733 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:20:05.944732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:20:05.945349 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:20:05.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.949966 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:20:05.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:05.950063 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
Dec 13 02:20:05.955514 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:20:05.958246 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:20:05.971087 systemd[1]: Switching root. Dec 13 02:20:05.994631 systemd-journald[185]: Journal stopped Dec 13 02:20:11.368117 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 02:20:11.369197 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:20:11.369224 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 02:20:11.369242 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:20:11.369267 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:20:11.369286 kernel: SELinux: policy capability open_perms=1 Dec 13 02:20:11.369306 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:20:11.369323 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:20:11.369342 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:20:11.369364 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:20:11.369383 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:20:11.369406 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:20:11.369425 systemd[1]: Successfully loaded SELinux policy in 109.137ms. Dec 13 02:20:11.369456 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.529ms. Dec 13 02:20:11.369477 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:20:11.369495 systemd[1]: Detected virtualization amazon. Dec 13 02:20:11.369513 systemd[1]: Detected architecture x86-64. Dec 13 02:20:11.369534 systemd[1]: Detected first boot. 
Dec 13 02:20:11.369552 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:20:11.369569 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 02:20:11.369587 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:20:11.369610 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:20:11.369630 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:20:11.369650 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:20:11.369671 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 13 02:20:11.369687 kernel: audit: type=1334 audit(1734056411.034:85): prog-id=12 op=LOAD Dec 13 02:20:11.369705 kernel: audit: type=1334 audit(1734056411.034:86): prog-id=3 op=UNLOAD Dec 13 02:20:11.369720 kernel: audit: type=1334 audit(1734056411.035:87): prog-id=13 op=LOAD Dec 13 02:20:11.369737 kernel: audit: type=1334 audit(1734056411.036:88): prog-id=14 op=LOAD Dec 13 02:20:11.369753 kernel: audit: type=1334 audit(1734056411.036:89): prog-id=4 op=UNLOAD Dec 13 02:20:11.369770 kernel: audit: type=1334 audit(1734056411.036:90): prog-id=5 op=UNLOAD Dec 13 02:20:11.372111 kernel: audit: type=1334 audit(1734056411.038:91): prog-id=15 op=LOAD Dec 13 02:20:11.372149 kernel: audit: type=1334 audit(1734056411.038:92): prog-id=12 op=UNLOAD Dec 13 02:20:11.372168 kernel: audit: type=1334 audit(1734056411.039:93): prog-id=16 op=LOAD Dec 13 02:20:11.372187 kernel: audit: type=1334 audit(1734056411.040:94): prog-id=17 op=LOAD Dec 13 02:20:11.372208 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Dec 13 02:20:11.372231 systemd[1]: Stopped initrd-switch-root.service. Dec 13 02:20:11.372251 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:20:11.372272 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:20:11.372295 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:20:11.372322 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 02:20:11.372345 systemd[1]: Created slice system-getty.slice. Dec 13 02:20:11.372367 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:20:11.372389 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:20:11.372409 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:20:11.372428 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:20:11.372451 systemd[1]: Created slice user.slice. Dec 13 02:20:11.372577 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:20:11.372599 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:20:11.372618 systemd[1]: Set up automount boot.automount. Dec 13 02:20:11.372636 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:20:11.372653 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 02:20:11.372671 systemd[1]: Stopped target initrd-fs.target. Dec 13 02:20:11.372689 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 02:20:11.372707 systemd[1]: Reached target integritysetup.target. Dec 13 02:20:11.372726 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:20:11.372865 systemd[1]: Reached target remote-fs.target. Dec 13 02:20:11.372891 systemd[1]: Reached target slices.target. Dec 13 02:20:11.372909 systemd[1]: Reached target swap.target. Dec 13 02:20:11.372927 systemd[1]: Reached target torcx.target. Dec 13 02:20:11.372945 systemd[1]: Reached target veritysetup.target. Dec 13 02:20:11.372963 systemd[1]: Listening on systemd-coredump.socket. 
Dec 13 02:20:11.378184 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:20:11.378237 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:20:11.378257 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:20:11.378277 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:20:11.378340 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:20:11.378365 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:20:11.378388 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:20:11.378406 systemd[1]: Mounting media.mount...
Dec 13 02:20:11.378423 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:20:11.378441 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:20:11.378460 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:20:11.378480 systemd[1]: Mounting tmp.mount...
Dec 13 02:20:11.378500 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:20:11.378518 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:20:11.378540 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:20:11.378560 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:20:11.378580 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:20:11.378599 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:20:11.378618 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:20:11.378637 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:20:11.378655 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:20:11.378781 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:20:11.379505 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:20:11.379538 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:20:11.379559 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:20:11.379582 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:20:11.379604 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:20:11.379626 systemd[1]: Starting systemd-journald.service...
Dec 13 02:20:11.379648 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:20:11.379668 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:20:11.379691 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:20:11.379713 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:20:11.379737 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:20:11.379758 systemd[1]: Stopped verity-setup.service.
Dec 13 02:20:11.379779 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:20:11.379921 kernel: loop: module loaded
Dec 13 02:20:11.379947 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:20:11.379968 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:20:11.379989 systemd[1]: Mounted media.mount.
Dec 13 02:20:11.380017 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:20:11.380034 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:20:11.380056 systemd[1]: Mounted tmp.mount.
Dec 13 02:20:11.380074 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:20:11.380093 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:20:11.380111 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:20:11.380133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:20:11.380158 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:20:11.380188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:20:11.380210 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:20:11.380230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:20:11.380326 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:20:11.380348 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:20:11.380368 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:20:11.380386 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:20:11.380406 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:20:11.380425 kernel: fuse: init (API version 7.34)
Dec 13 02:20:11.380571 systemd[1]: Reached target network-pre.target.
Dec 13 02:20:11.380598 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:20:11.380624 systemd-journald[1237]: Journal started
Dec 13 02:20:11.380709 systemd-journald[1237]: Runtime Journal (/run/log/journal/ec2df78ff6224b218282a46346d8b4b8) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:20:06.570000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:20:06.710000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:20:06.710000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:20:06.710000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:20:06.710000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:20:06.710000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:20:06.710000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:20:06.888000 audit[1165]: AVC avc: denied { associate } for pid=1165 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:20:06.888000 audit[1165]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1148 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:20:06.888000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:20:06.890000 audit[1165]: AVC avc: denied { associate } for pid=1165 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:20:06.890000 audit[1165]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=1148 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:20:06.890000 audit: CWD cwd="/"
Dec 13 02:20:06.890000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:06.890000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:06.890000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:20:11.034000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:20:11.034000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:20:11.035000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:20:11.036000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:20:11.036000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:20:11.036000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:20:11.038000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:20:11.038000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:20:11.039000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:20:11.040000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:20:11.040000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:20:11.040000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:20:11.041000 audit: BPF prog-id=18 op=LOAD
Dec 13 02:20:11.041000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 02:20:11.043000 audit: BPF prog-id=19 op=LOAD
Dec 13 02:20:11.044000 audit: BPF prog-id=20 op=LOAD
Dec 13 02:20:11.044000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 02:20:11.044000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 02:20:11.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.052000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 02:20:11.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.264000 audit: BPF prog-id=21 op=LOAD
Dec 13 02:20:11.264000 audit: BPF prog-id=22 op=LOAD
Dec 13 02:20:11.264000 audit: BPF prog-id=23 op=LOAD
Dec 13 02:20:11.264000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 02:20:11.264000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 02:20:11.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.364000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:20:11.364000 audit[1237]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffd0d375950 a2=4000 a3=7ffd0d3759ec items=0 ppid=1 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:20:11.364000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:20:11.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:06.877600 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:20:11.033633 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:20:11.383976 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:20:06.878581 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:20:11.046714 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:20:06.878602 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:20:06.878636 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:20:06.878646 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:20:06.878682 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 02:20:06.878695 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 02:20:06.878927 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 02:20:06.878969 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:20:06.878982 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:20:06.888703 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 02:20:06.888750 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 02:20:06.888773 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 02:20:06.888809 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 02:20:06.888831 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 02:20:11.396103 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 02:20:11.396172 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:20:11.396199 systemd[1]: Starting systemd-random-seed.service...
Dec 13 02:20:11.396223 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:20:06.888844 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 02:20:10.403218 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:10Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:20:10.403905 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:10Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:20:10.404278 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:10Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:20:10.404681 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:10Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:20:10.404739 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:10Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 02:20:10.404858 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2024-12-13T02:20:10Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 02:20:11.402483 systemd[1]: Started systemd-journald.service.
Dec 13 02:20:11.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.403929 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:20:11.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.404179 systemd[1]: Finished modprobe@fuse.service.
Dec 13 02:20:11.405687 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:20:11.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.407077 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 02:20:11.411020 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 02:20:11.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.414354 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 02:20:11.417179 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:20:11.423779 systemd[1]: Finished systemd-random-seed.service.
Dec 13 02:20:11.425231 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 02:20:11.426483 systemd[1]: Reached target first-boot-complete.target.
Dec 13 02:20:11.443212 systemd-journald[1237]: Time spent on flushing to /var/log/journal/ec2df78ff6224b218282a46346d8b4b8 is 67.694ms for 1176 entries.
Dec 13 02:20:11.443212 systemd-journald[1237]: System Journal (/var/log/journal/ec2df78ff6224b218282a46346d8b4b8) is 8.0M, max 195.6M, 187.6M free.
Dec 13 02:20:11.525535 systemd-journald[1237]: Received client request to flush runtime journal.
Dec 13 02:20:11.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.467467 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:20:11.526806 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 02:20:11.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.540712 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:20:11.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.543071 systemd[1]: Starting systemd-sysusers.service...
Dec 13 02:20:11.548622 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:20:11.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:11.551020 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 02:20:11.565846 udevadm[1281]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 02:20:11.594933 systemd[1]: Finished systemd-sysusers.service.
Dec 13 02:20:11.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:12.133534 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 02:20:12.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:12.133000 audit: BPF prog-id=24 op=LOAD
Dec 13 02:20:12.133000 audit: BPF prog-id=25 op=LOAD
Dec 13 02:20:12.133000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 02:20:12.133000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 02:20:12.135995 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:20:12.155732 systemd-udevd[1282]: Using default interface naming scheme 'v252'.
Dec 13 02:20:12.199037 systemd[1]: Started systemd-udevd.service.
Dec 13 02:20:12.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:12.199000 audit: BPF prog-id=26 op=LOAD
Dec 13 02:20:12.205697 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:20:12.213000 audit: BPF prog-id=27 op=LOAD
Dec 13 02:20:12.213000 audit: BPF prog-id=28 op=LOAD
Dec 13 02:20:12.213000 audit: BPF prog-id=29 op=LOAD
Dec 13 02:20:12.216687 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:20:12.268150 systemd[1]: Started systemd-userdbd.service.
Dec 13 02:20:12.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:12.295465 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 02:20:12.338041 (udev-worker)[1294]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:20:12.361815 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 02:20:12.367814 kernel: ACPI: button: Power Button [PWRF]
Dec 13 02:20:12.367906 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 13 02:20:12.372808 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 02:20:12.396074 systemd-networkd[1289]: lo: Link UP
Dec 13 02:20:12.396090 systemd-networkd[1289]: lo: Gained carrier
Dec 13 02:20:12.397481 systemd-networkd[1289]: Enumeration completed
Dec 13 02:20:12.397620 systemd[1]: Started systemd-networkd.service.
Dec 13 02:20:12.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:12.398724 systemd-networkd[1289]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:20:12.400380 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 02:20:12.404519 systemd-networkd[1289]: eth0: Link UP
Dec 13 02:20:12.404800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:20:12.404983 systemd-networkd[1289]: eth0: Gained carrier
Dec 13 02:20:12.417228 systemd-networkd[1289]: eth0: DHCPv4 address 172.31.22.41/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 02:20:12.429000 audit[1285]: AVC avc: denied { confidentiality } for pid=1285 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:20:12.429000 audit[1285]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c05d4811f0 a1=337fc a2=7f803cdfbbc5 a3=5 items=110 ppid=1282 pid=1285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:20:12.429000 audit: CWD cwd="/"
Dec 13 02:20:12.429000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=1 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=2 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=3 name=(null) inode=14640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=4 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=5 name=(null) inode=14641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=6 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=7 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=8 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=9 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=10 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=11 name=(null) inode=14644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=12 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=13 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=14 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=15 name=(null) inode=14646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=16 name=(null) inode=14642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=17 name=(null) inode=14647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=18 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=19 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=20 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=21 name=(null) inode=14649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=22 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=23 name=(null) inode=14650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=24 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=25 name=(null) inode=14651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=26 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=27 name=(null) inode=14652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=28 name=(null) inode=14648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=29 name=(null) inode=14653 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=30 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=31 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:20:12.429000 audit: PATH item=32
name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=33 name=(null) inode=14655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=34 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=35 name=(null) inode=14656 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=36 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=37 name=(null) inode=14657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=38 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=39 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=40 name=(null) inode=14654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=41 name=(null) inode=14659 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=42 name=(null) inode=14639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=43 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=44 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=45 name=(null) inode=14661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=46 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=47 name=(null) inode=14662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=48 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=49 name=(null) inode=14663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=50 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=51 name=(null) inode=14664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=52 name=(null) inode=14660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=53 name=(null) inode=14665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=55 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=56 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=57 name=(null) inode=14667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=58 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=59 name=(null) inode=14668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=60 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=61 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=62 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=63 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=64 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=65 name=(null) inode=14671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=66 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=67 name=(null) inode=14672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=68 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=69 name=(null) inode=14673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=70 name=(null) inode=14669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=71 name=(null) inode=14674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=72 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=73 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=74 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=75 name=(null) inode=14676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=76 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=77 name=(null) inode=14677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:20:12.429000 audit: PATH item=78 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=79 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=80 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=81 name=(null) inode=14679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=82 name=(null) inode=14675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=83 name=(null) inode=14680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=84 name=(null) inode=14666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=85 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=86 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=87 
name=(null) inode=14682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=88 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=89 name=(null) inode=14683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=90 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=91 name=(null) inode=14684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=92 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=93 name=(null) inode=14685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=94 name=(null) inode=14681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=95 name=(null) inode=14686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=96 name=(null) inode=14666 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=97 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=98 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=99 name=(null) inode=14688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=100 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=101 name=(null) inode=14689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=102 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=103 name=(null) inode=14690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=104 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=105 name=(null) inode=14691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=106 name=(null) inode=14687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=107 name=(null) inode=14692 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PATH item=109 name=(null) inode=14693 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:20:12.429000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:20:12.477819 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 02:20:12.519936 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1285) Dec 13 02:20:12.544846 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 02:20:12.559813 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:20:12.662735 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:20:12.747246 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:20:12.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:12.749597 systemd[1]: Starting lvm2-activation-early.service... 
Dec 13 02:20:12.776617 lvm[1396]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:20:12.807118 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 02:20:12.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:12.808341 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:20:12.810552 systemd[1]: Starting lvm2-activation.service...
Dec 13 02:20:12.819700 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:20:12.843375 systemd[1]: Finished lvm2-activation.service.
Dec 13 02:20:12.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:12.844568 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:20:12.845465 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:20:12.845499 systemd[1]: Reached target local-fs.target.
Dec 13 02:20:12.846599 systemd[1]: Reached target machines.target.
Dec 13 02:20:12.849063 systemd[1]: Starting ldconfig.service...
Dec 13 02:20:12.850810 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:20:12.850892 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:20:12.852476 systemd[1]: Starting systemd-boot-update.service...
Dec 13 02:20:12.854881 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 02:20:12.858206 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 02:20:12.860524 systemd[1]: Starting systemd-sysext.service...
Dec 13 02:20:12.895047 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1399 (bootctl)
Dec 13 02:20:12.905108 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 02:20:12.909436 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 02:20:12.920187 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 02:20:12.920548 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 02:20:12.942366 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 02:20:12.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:12.945825 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 02:20:13.071829 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 02:20:13.071359 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:20:13.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.072228 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 02:20:13.109577 systemd-fsck[1410]: fsck.fat 4.2 (2021-01-31)
Dec 13 02:20:13.109577 systemd-fsck[1410]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters
Dec 13 02:20:13.137449 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 02:20:13.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.117965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 02:20:13.132631 systemd[1]: Mounting boot.mount...
Dec 13 02:20:13.164151 systemd[1]: Mounted boot.mount.
Dec 13 02:20:13.187276 (sd-sysext)[1414]: Using extensions 'kubernetes'.
Dec 13 02:20:13.191504 (sd-sysext)[1414]: Merged extensions into '/usr'.
Dec 13 02:20:13.247026 systemd[1]: Finished systemd-boot-update.service.
Dec 13 02:20:13.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.255724 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:20:13.257644 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 02:20:13.259888 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:20:13.264725 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:20:13.268443 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:20:13.273978 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:20:13.276244 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:20:13.276631 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:20:13.276821 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:20:13.285740 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 02:20:13.287191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:20:13.287321 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:20:13.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.289056 systemd[1]: Finished systemd-sysext.service.
Dec 13 02:20:13.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.290235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:20:13.290357 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:20:13.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.292482 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:20:13.292717 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:20:13.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.299700 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:20:13.301595 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:20:13.301769 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:20:13.309965 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 02:20:13.321610 systemd[1]: Reloading.
Dec 13 02:20:13.386676 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 02:20:13.401031 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 02:20:13.414050 systemd-tmpfiles[1433]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 02:20:13.480565 /usr/lib/systemd/system-generators/torcx-generator[1455]: time="2024-12-13T02:20:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:20:13.481180 /usr/lib/systemd/system-generators/torcx-generator[1455]: time="2024-12-13T02:20:13Z" level=info msg="torcx already run"
Dec 13 02:20:13.668915 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:20:13.669240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:20:13.702327 ldconfig[1398]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:20:13.705059 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:20:13.770000 audit: BPF prog-id=30 op=LOAD
Dec 13 02:20:13.770000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 02:20:13.770000 audit: BPF prog-id=31 op=LOAD
Dec 13 02:20:13.770000 audit: BPF prog-id=32 op=LOAD
Dec 13 02:20:13.770000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 02:20:13.770000 audit: BPF prog-id=29 op=UNLOAD
Dec 13 02:20:13.772000 audit: BPF prog-id=33 op=LOAD
Dec 13 02:20:13.772000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 02:20:13.774000 audit: BPF prog-id=34 op=LOAD
Dec 13 02:20:13.774000 audit: BPF prog-id=35 op=LOAD
Dec 13 02:20:13.774000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 02:20:13.774000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 02:20:13.775000 audit: BPF prog-id=36 op=LOAD
Dec 13 02:20:13.775000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 02:20:13.775000 audit: BPF prog-id=37 op=LOAD
Dec 13 02:20:13.775000 audit: BPF prog-id=38 op=LOAD
Dec 13 02:20:13.775000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 02:20:13.775000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 02:20:13.781105 systemd[1]: Finished ldconfig.service.
Dec 13 02:20:13.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.784135 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 02:20:13.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.794084 systemd[1]: Starting audit-rules.service...
Dec 13 02:20:13.797617 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 02:20:13.802351 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 02:20:13.806000 audit: BPF prog-id=39 op=LOAD
Dec 13 02:20:13.812000 audit: BPF prog-id=40 op=LOAD
Dec 13 02:20:13.810130 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:20:13.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.845000 audit[1512]: SYSTEM_BOOT pid=1512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.817507 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 02:20:13.822016 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 02:20:13.823854 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 02:20:13.835117 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:20:13.862378 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 02:20:13.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:20:13.865880 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:20:13.870030 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:20:13.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.873343 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:20:13.876170 systemd[1]: Starting modprobe@loop.service... Dec 13 02:20:13.877069 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:20:13.877278 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:20:13.877478 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:20:13.878774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:20:13.879692 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:20:13.881489 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:20:13.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:20:13.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.889403 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:20:13.891531 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:20:13.892515 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:20:13.892709 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:20:13.892919 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:20:13.893737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:20:13.893971 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:20:13.895977 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:20:13.896245 systemd[1]: Finished modprobe@loop.service. Dec 13 02:20:13.897820 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:20:13.904650 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:20:13.907544 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:20:13.911157 systemd[1]: Starting modprobe@drm.service... Dec 13 02:20:13.914143 systemd[1]: Starting modprobe@loop.service... 
Dec 13 02:20:13.915199 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:20:13.915420 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:20:13.915645 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:20:13.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.923469 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:20:13.925406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:20:13.925593 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 02:20:13.927435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:20:13.927610 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:20:13.929351 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:20:13.929479 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:20:13.933051 systemd[1]: Starting systemd-update-done.service... Dec 13 02:20:13.933973 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:20:13.941261 systemd[1]: Finished ensure-sysext.service. Dec 13 02:20:13.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.943830 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:20:13.944015 systemd[1]: Finished modprobe@loop.service. Dec 13 02:20:13.945093 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:20:13.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:20:13.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:20:13.947910 systemd[1]: Finished systemd-update-done.service. Dec 13 02:20:13.949301 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:20:13.949459 systemd[1]: Finished modprobe@drm.service. Dec 13 02:20:13.987000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:20:13.987000 audit[1534]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce7856670 a2=420 a3=0 items=0 ppid=1506 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:20:13.987000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:20:13.989220 augenrules[1534]: No rules Dec 13 02:20:13.990026 systemd[1]: Finished audit-rules.service. Dec 13 02:20:14.024416 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:20:14.026362 systemd[1]: Reached target time-set.target. Dec 13 02:20:14.037402 systemd-resolved[1510]: Positive Trust Anchors: Dec 13 02:20:14.037700 systemd-resolved[1510]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:20:14.037776 systemd-resolved[1510]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:20:14.070220 systemd-resolved[1510]: Defaulting to hostname 'linux'. Dec 13 02:20:14.071971 systemd[1]: Started systemd-resolved.service. Dec 13 02:20:14.073094 systemd[1]: Reached target network.target. Dec 13 02:20:14.073917 systemd[1]: Reached target nss-lookup.target. Dec 13 02:20:14.074868 systemd[1]: Reached target sysinit.target. Dec 13 02:20:14.075762 systemd[1]: Started motdgen.path. Dec 13 02:20:14.076598 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:20:14.077960 systemd[1]: Started logrotate.timer. Dec 13 02:20:14.078838 systemd[1]: Started mdadm.timer. Dec 13 02:20:14.079532 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:20:14.080631 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:20:14.080663 systemd[1]: Reached target paths.target. Dec 13 02:20:14.081404 systemd[1]: Reached target timers.target. Dec 13 02:20:14.082463 systemd[1]: Listening on dbus.socket. Dec 13 02:20:14.084392 systemd[1]: Starting docker.socket... Dec 13 02:20:14.090047 systemd[1]: Listening on sshd.socket. Dec 13 02:20:14.091035 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 02:20:14.091536 systemd[1]: Listening on docker.socket. Dec 13 02:20:14.092468 systemd[1]: Reached target sockets.target. Dec 13 02:20:14.093341 systemd[1]: Reached target basic.target. Dec 13 02:20:14.094284 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:20:14.094313 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:20:14.095545 systemd[1]: Starting containerd.service... Dec 13 02:20:14.098287 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:20:14.101612 systemd[1]: Starting dbus.service... Dec 13 02:20:14.105278 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:20:14.107869 systemd[1]: Starting extend-filesystems.service... Dec 13 02:20:14.108559 systemd-networkd[1289]: eth0: Gained IPv6LL Dec 13 02:20:14.108960 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:20:14.110493 systemd[1]: Starting motdgen.service... Dec 13 02:20:14.112633 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:20:14.114737 systemd[1]: Starting sshd-keygen.service... Dec 13 02:20:14.119087 systemd[1]: Starting systemd-logind.service... Dec 13 02:20:14.125154 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:20:14.127518 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:20:14.128233 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:20:14.133407 systemd[1]: Starting update-engine.service... 
Dec 13 02:20:14.200934 jq[1545]: false Dec 13 02:20:14.137717 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:20:14.149010 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:20:14.151268 systemd[1]: Reached target network-online.target. Dec 13 02:20:14.155456 systemd[1]: Started amazon-ssm-agent.service. Dec 13 02:20:14.162459 systemd[1]: Starting kubelet.service... Dec 13 02:20:14.166416 systemd[1]: Started nvidia.service. Dec 13 02:20:14.266650 jq[1552]: true Dec 13 02:20:14.169065 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:20:14.169336 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:20:14.180260 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:20:14.181267 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:20:14.267648 systemd-timesyncd[1511]: Contacted time server 23.168.136.132:123 (0.flatcar.pool.ntp.org). Dec 13 02:20:14.267740 systemd-timesyncd[1511]: Initial clock synchronization to Fri 2024-12-13 02:20:14.348271 UTC. Dec 13 02:20:14.398620 jq[1574]: true Dec 13 02:20:14.400652 extend-filesystems[1546]: Found loop1 Dec 13 02:20:14.401873 extend-filesystems[1546]: Found nvme0n1 Dec 13 02:20:14.401873 extend-filesystems[1546]: Found nvme0n1p1 Dec 13 02:20:14.401873 extend-filesystems[1546]: Found nvme0n1p2 Dec 13 02:20:14.409316 dbus-daemon[1544]: [system] SELinux support is enabled Dec 13 02:20:14.409522 systemd[1]: Started dbus.service. 
Dec 13 02:20:14.413416 dbus-daemon[1544]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1289 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:20:14.413635 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:20:14.413672 systemd[1]: Reached target system-config.target. Dec 13 02:20:14.424163 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:20:14.424199 systemd[1]: Reached target user-config.target. Dec 13 02:20:14.437165 extend-filesystems[1546]: Found nvme0n1p3 Dec 13 02:20:14.437165 extend-filesystems[1546]: Found usr Dec 13 02:20:14.437165 extend-filesystems[1546]: Found nvme0n1p4 Dec 13 02:20:14.437165 extend-filesystems[1546]: Found nvme0n1p6 Dec 13 02:20:14.437165 extend-filesystems[1546]: Found nvme0n1p7 Dec 13 02:20:14.437165 extend-filesystems[1546]: Found nvme0n1p9 Dec 13 02:20:14.437165 extend-filesystems[1546]: Checking size of /dev/nvme0n1p9 Dec 13 02:20:14.445021 dbus-daemon[1544]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:20:14.459290 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:20:14.459508 systemd[1]: Finished motdgen.service. Dec 13 02:20:14.464949 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:20:14.496812 extend-filesystems[1546]: Resized partition /dev/nvme0n1p9 Dec 13 02:20:14.525894 extend-filesystems[1594]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:20:14.538262 amazon-ssm-agent[1555]: 2024/12/13 02:20:14 Failed to load instance info from vault. RegistrationKey does not exist. 
Dec 13 02:20:14.538805 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 02:20:14.571023 amazon-ssm-agent[1555]: Initializing new seelog logger Dec 13 02:20:14.571023 amazon-ssm-agent[1555]: New Seelog Logger Creation Complete Dec 13 02:20:14.571023 amazon-ssm-agent[1555]: 2024/12/13 02:20:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:20:14.571023 amazon-ssm-agent[1555]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:20:14.571023 amazon-ssm-agent[1555]: 2024/12/13 02:20:14 processing appconfig overrides Dec 13 02:20:14.606799 update_engine[1551]: I1213 02:20:14.605636 1551 main.cc:92] Flatcar Update Engine starting Dec 13 02:20:14.613428 systemd[1]: Started update-engine.service. Dec 13 02:20:14.614095 update_engine[1551]: I1213 02:20:14.613909 1551 update_check_scheduler.cc:74] Next update check in 5m5s Dec 13 02:20:14.616882 systemd[1]: Started locksmithd.service. Dec 13 02:20:14.618803 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 02:20:14.635625 extend-filesystems[1594]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 02:20:14.635625 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 02:20:14.635625 extend-filesystems[1594]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 02:20:14.639746 extend-filesystems[1546]: Resized filesystem in /dev/nvme0n1p9 Dec 13 02:20:14.639757 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:20:14.640969 systemd[1]: Finished extend-filesystems.service. Dec 13 02:20:14.645510 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:20:14.647185 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Dec 13 02:20:14.667308 env[1567]: time="2024-12-13T02:20:14.667242726Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:20:14.763818 systemd-logind[1550]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:20:14.764352 systemd-logind[1550]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:20:14.765875 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:20:14.766263 systemd-logind[1550]: New seat seat0. Dec 13 02:20:14.768107 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 02:20:14.770564 systemd[1]: Started systemd-logind.service. Dec 13 02:20:14.856529 env[1567]: time="2024-12-13T02:20:14.856427738Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:20:14.858984 dbus-daemon[1544]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:20:14.859186 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:20:14.859916 dbus-daemon[1544]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1586 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:20:14.860760 env[1567]: time="2024-12-13T02:20:14.860724216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:20:14.862727 env[1567]: time="2024-12-13T02:20:14.862683541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:20:14.864266 systemd[1]: Starting polkit.service... 
Dec 13 02:20:14.867202 env[1567]: time="2024-12-13T02:20:14.867164673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:20:14.867692 env[1567]: time="2024-12-13T02:20:14.867660521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:20:14.867872 env[1567]: time="2024-12-13T02:20:14.867852297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:20:14.867962 env[1567]: time="2024-12-13T02:20:14.867943345Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:20:14.868055 env[1567]: time="2024-12-13T02:20:14.868040279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:20:14.868232 env[1567]: time="2024-12-13T02:20:14.868214863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:20:14.869486 env[1567]: time="2024-12-13T02:20:14.869458131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:20:14.874258 env[1567]: time="2024-12-13T02:20:14.874210312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:20:14.876165 env[1567]: time="2024-12-13T02:20:14.876110725Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 02:20:14.881390 env[1567]: time="2024-12-13T02:20:14.881347324Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:20:14.881583 env[1567]: time="2024-12-13T02:20:14.881561315Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:20:14.885745 polkitd[1644]: Started polkitd version 121 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893196072Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893249230Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893273735Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893323291Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893344832Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893365210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893383999Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893404189Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893424621Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893445003Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893464150Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:20:14.893685 env[1567]: time="2024-12-13T02:20:14.893483736Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.895115893Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.895341851Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.895747555Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.895811882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.895835363Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.895919127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.895955009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.895975088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.896002440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.896036411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.896056808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.896074675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.896092929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.896128974Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:20:14.897771 env[1567]: time="2024-12-13T02:20:14.896318365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.899452 env[1567]: time="2024-12-13T02:20:14.896357407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.899452 env[1567]: time="2024-12-13T02:20:14.896379983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.899452 env[1567]: time="2024-12-13T02:20:14.896400000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:20:14.899452 env[1567]: time="2024-12-13T02:20:14.896442120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:20:14.899452 env[1567]: time="2024-12-13T02:20:14.896463464Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:20:14.899452 env[1567]: time="2024-12-13T02:20:14.896502068Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:20:14.899452 env[1567]: time="2024-12-13T02:20:14.896546744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:20:14.900572 env[1567]: time="2024-12-13T02:20:14.896949151Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:20:14.900572 env[1567]: time="2024-12-13T02:20:14.897063610Z" level=info msg="Connect containerd service" Dec 13 02:20:14.900572 env[1567]: time="2024-12-13T02:20:14.897112278Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:20:14.906229 env[1567]: time="2024-12-13T02:20:14.902765651Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:20:14.906229 env[1567]: time="2024-12-13T02:20:14.904219729Z" level=info msg="Start subscribing containerd event" Dec 13 02:20:14.906229 env[1567]: time="2024-12-13T02:20:14.904322443Z" level=info msg="Start recovering state" Dec 13 02:20:14.906229 env[1567]: time="2024-12-13T02:20:14.904440263Z" level=info msg="Start event monitor" Dec 13 02:20:14.906229 env[1567]: time="2024-12-13T02:20:14.904458796Z" level=info msg="Start snapshots syncer" Dec 13 02:20:14.907074 env[1567]: time="2024-12-13T02:20:14.907043676Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:20:14.915089 env[1567]: 
time="2024-12-13T02:20:14.914984419Z" level=info msg="Start streaming server" Dec 13 02:20:14.917639 polkitd[1644]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:20:14.919124 env[1567]: time="2024-12-13T02:20:14.919055679Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:20:14.922205 polkitd[1644]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:20:14.922594 env[1567]: time="2024-12-13T02:20:14.922545822Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:20:14.922898 env[1567]: time="2024-12-13T02:20:14.922879436Z" level=info msg="containerd successfully booted in 0.294750s" Dec 13 02:20:14.922901 systemd[1]: Started containerd.service. Dec 13 02:20:14.928996 polkitd[1644]: Finished loading, compiling and executing 2 rules Dec 13 02:20:14.929714 dbus-daemon[1544]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:20:14.930038 systemd[1]: Started polkit.service. Dec 13 02:20:14.930606 polkitd[1644]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:20:14.952076 systemd-hostnamed[1586]: Hostname set to (transient) Dec 13 02:20:14.952196 systemd-resolved[1510]: System hostname changed to 'ip-172-31-22-41'. 
Dec 13 02:20:15.185993 coreos-metadata[1543]: Dec 13 02:20:15.183 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 02:20:15.196360 coreos-metadata[1543]: Dec 13 02:20:15.193 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 02:20:15.200237 coreos-metadata[1543]: Dec 13 02:20:15.199 INFO Fetch successful Dec 13 02:20:15.200415 coreos-metadata[1543]: Dec 13 02:20:15.200 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 02:20:15.201982 coreos-metadata[1543]: Dec 13 02:20:15.201 INFO Fetch successful Dec 13 02:20:15.227100 unknown[1543]: wrote ssh authorized keys file for user: core Dec 13 02:20:15.245616 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Create new startup processor Dec 13 02:20:15.248282 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 02:20:15.248282 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Initializing bookkeeping folders Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO removing the completed state files Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Initializing bookkeeping folders for long running plugins Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Initializing healthcheck folders for long running plugins Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Initializing locations for inventory plugin Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Initializing default location for custom inventory Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Initializing default location for file inventory Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Initializing 
default location for role inventory Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Init the cloudwatchlogs publisher Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:runDocument Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:configureDocker Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:configurePackage Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:downloadContent Dec 13 02:20:15.248456 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 02:20:15.249084 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 02:20:15.249084 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 02:20:15.249084 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 
02:20:15.249084 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO OS: linux, Arch: amd64 Dec 13 02:20:15.250583 amazon-ssm-agent[1555]: datastore file /var/lib/amazon/ssm/i-0c814d087c5197e95/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 02:20:15.256397 update-ssh-keys[1717]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:20:15.256990 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:20:15.350254 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 02:20:15.444935 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 02:20:15.540997 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 02:20:15.626949 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:20:15.635480 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] Starting message polling Dec 13 02:20:15.730258 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 02:20:15.825133 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [instanceID=i-0c814d087c5197e95] Starting association polling Dec 13 02:20:15.922018 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 02:20:16.017929 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 02:20:16.115476 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 02:20:16.209016 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 02:20:16.267049 sshd_keygen[1577]: ssh-keygen: 
generating new host keys: RSA ECDSA ED25519 Dec 13 02:20:16.304937 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 02:20:16.315461 systemd[1]: Finished sshd-keygen.service. Dec 13 02:20:16.329635 systemd[1]: Starting issuegen.service... Dec 13 02:20:16.349357 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:20:16.349582 systemd[1]: Finished issuegen.service. Dec 13 02:20:16.353051 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:20:16.374813 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:20:16.381015 systemd[1]: Started getty@tty1.service. Dec 13 02:20:16.384011 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:20:16.385371 systemd[1]: Reached target getty.target. Dec 13 02:20:16.400857 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 02:20:16.451187 systemd[1]: Started kubelet.service. Dec 13 02:20:16.452934 systemd[1]: Reached target multi-user.target. Dec 13 02:20:16.455720 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:20:16.470527 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:20:16.470851 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:20:16.472158 systemd[1]: Startup finished in 809ms (kernel) + 7.545s (initrd) + 10.021s (userspace) = 18.377s. Dec 13 02:20:16.497747 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 02:20:16.594266 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. 
Dec 13 02:20:16.691143 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0c814d087c5197e95, requestId: dc71106c-06cd-4803-8c89-5d440eaea72f Dec 13 02:20:16.788232 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [OfflineService] Starting document processing engine... Dec 13 02:20:16.885396 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [OfflineService] [EngineProcessor] Starting Dec 13 02:20:16.982754 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 02:20:17.080058 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [OfflineService] Starting message polling Dec 13 02:20:17.178369 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [OfflineService] Starting send replies to MDS Dec 13 02:20:17.276172 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 02:20:17.374181 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 02:20:17.472584 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 02:20:17.572452 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 02:20:17.669778 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] listening reply. 
Dec 13 02:20:17.768804 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [StartupProcessor] Executing startup processor tasks Dec 13 02:20:17.850587 kubelet[1749]: E1213 02:20:17.850442 1749 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:20:17.858742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:20:17.858938 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:20:17.859873 systemd[1]: kubelet.service: Consumed 1.192s CPU time. Dec 13 02:20:17.867747 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 02:20:17.967202 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 02:20:18.066887 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 02:20:18.166744 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c814d087c5197e95?role=subscribe&stream=input Dec 13 02:20:18.267016 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c814d087c5197e95?role=subscribe&stream=input Dec 13 02:20:18.367391 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 02:20:18.467967 amazon-ssm-agent[1555]: 2024-12-13 02:20:15 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 
02:20:21.242268 amazon-ssm-agent[1555]: 2024-12-13 02:20:21 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 02:20:23.494580 systemd[1]: Created slice system-sshd.slice. Dec 13 02:20:23.496625 systemd[1]: Started sshd@0-172.31.22.41:22-139.178.68.195:51022.service. Dec 13 02:20:23.689385 sshd[1758]: Accepted publickey for core from 139.178.68.195 port 51022 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:20:23.692432 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:20:23.707902 systemd[1]: Created slice user-500.slice. Dec 13 02:20:23.709373 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:20:23.713838 systemd-logind[1550]: New session 1 of user core. Dec 13 02:20:23.724382 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:20:23.726530 systemd[1]: Starting user@500.service... Dec 13 02:20:23.731261 (systemd)[1761]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:20:23.834471 systemd[1761]: Queued start job for default target default.target. Dec 13 02:20:23.835688 systemd[1761]: Reached target paths.target. Dec 13 02:20:23.836775 systemd[1761]: Reached target sockets.target. Dec 13 02:20:23.836809 systemd[1761]: Reached target timers.target. Dec 13 02:20:23.836828 systemd[1761]: Reached target basic.target. Dec 13 02:20:23.836961 systemd[1]: Started user@500.service. Dec 13 02:20:23.838336 systemd[1]: Started session-1.scope. Dec 13 02:20:23.838893 systemd[1761]: Reached target default.target. Dec 13 02:20:23.839099 systemd[1761]: Startup finished in 100ms. Dec 13 02:20:23.988966 systemd[1]: Started sshd@1-172.31.22.41:22-139.178.68.195:51038.service. 
Dec 13 02:20:24.164801 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 51038 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:20:24.166234 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:20:24.172663 systemd-logind[1550]: New session 2 of user core. Dec 13 02:20:24.173286 systemd[1]: Started session-2.scope. Dec 13 02:20:24.298759 sshd[1770]: pam_unix(sshd:session): session closed for user core Dec 13 02:20:24.303070 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:20:24.303507 systemd[1]: sshd@1-172.31.22.41:22-139.178.68.195:51038.service: Deactivated successfully. Dec 13 02:20:24.304672 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:20:24.305507 systemd-logind[1550]: Removed session 2. Dec 13 02:20:24.327905 systemd[1]: Started sshd@2-172.31.22.41:22-139.178.68.195:51054.service. Dec 13 02:20:24.493382 sshd[1777]: Accepted publickey for core from 139.178.68.195 port 51054 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:20:24.495442 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:20:24.503779 systemd-logind[1550]: New session 3 of user core. Dec 13 02:20:24.504436 systemd[1]: Started session-3.scope. Dec 13 02:20:24.627020 sshd[1777]: pam_unix(sshd:session): session closed for user core Dec 13 02:20:24.630816 systemd[1]: sshd@2-172.31.22.41:22-139.178.68.195:51054.service: Deactivated successfully. Dec 13 02:20:24.632363 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:20:24.633086 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:20:24.634418 systemd-logind[1550]: Removed session 3. Dec 13 02:20:24.652179 systemd[1]: Started sshd@3-172.31.22.41:22-139.178.68.195:51060.service. 
Dec 13 02:20:24.816270 sshd[1783]: Accepted publickey for core from 139.178.68.195 port 51060 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:20:24.818122 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:20:24.823747 systemd[1]: Started session-4.scope. Dec 13 02:20:24.824242 systemd-logind[1550]: New session 4 of user core. Dec 13 02:20:24.958881 sshd[1783]: pam_unix(sshd:session): session closed for user core Dec 13 02:20:24.963062 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:20:24.963538 systemd[1]: sshd@3-172.31.22.41:22-139.178.68.195:51060.service: Deactivated successfully. Dec 13 02:20:24.965288 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:20:24.968970 systemd-logind[1550]: Removed session 4. Dec 13 02:20:24.995698 systemd[1]: Started sshd@4-172.31.22.41:22-139.178.68.195:51068.service. Dec 13 02:20:25.185232 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 51068 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:20:25.187212 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:20:25.194275 systemd[1]: Started session-5.scope. Dec 13 02:20:25.194991 systemd-logind[1550]: New session 5 of user core. Dec 13 02:20:25.317222 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:20:25.317555 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:20:25.338469 systemd[1]: Starting coreos-metadata.service... 
Dec 13 02:20:25.441674 coreos-metadata[1796]: Dec 13 02:20:25.441 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 02:20:25.442653 coreos-metadata[1796]: Dec 13 02:20:25.442 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Dec 13 02:20:25.443566 coreos-metadata[1796]: Dec 13 02:20:25.443 INFO Fetch successful Dec 13 02:20:25.443670 coreos-metadata[1796]: Dec 13 02:20:25.443 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Dec 13 02:20:25.444062 coreos-metadata[1796]: Dec 13 02:20:25.443 INFO Fetch successful Dec 13 02:20:25.444165 coreos-metadata[1796]: Dec 13 02:20:25.444 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Dec 13 02:20:25.446445 coreos-metadata[1796]: Dec 13 02:20:25.446 INFO Fetch successful Dec 13 02:20:25.446544 coreos-metadata[1796]: Dec 13 02:20:25.446 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Dec 13 02:20:25.447483 coreos-metadata[1796]: Dec 13 02:20:25.447 INFO Fetch successful Dec 13 02:20:25.447639 coreos-metadata[1796]: Dec 13 02:20:25.447 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Dec 13 02:20:25.448394 coreos-metadata[1796]: Dec 13 02:20:25.448 INFO Fetch successful Dec 13 02:20:25.448516 coreos-metadata[1796]: Dec 13 02:20:25.448 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Dec 13 02:20:25.449126 coreos-metadata[1796]: Dec 13 02:20:25.449 INFO Fetch successful Dec 13 02:20:25.449281 coreos-metadata[1796]: Dec 13 02:20:25.449 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Dec 13 02:20:25.449962 coreos-metadata[1796]: Dec 13 02:20:25.449 INFO Fetch successful Dec 13 02:20:25.450117 coreos-metadata[1796]: Dec 13 02:20:25.449 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Dec 13 02:20:25.450682 
coreos-metadata[1796]: Dec 13 02:20:25.450 INFO Fetch successful Dec 13 02:20:25.461417 systemd[1]: Finished coreos-metadata.service. Dec 13 02:20:26.986857 systemd[1]: Stopped kubelet.service. Dec 13 02:20:26.988181 systemd[1]: kubelet.service: Consumed 1.192s CPU time. Dec 13 02:20:26.991842 systemd[1]: Starting kubelet.service... Dec 13 02:20:27.051536 systemd[1]: Reloading. Dec 13 02:20:27.193656 /usr/lib/systemd/system-generators/torcx-generator[1855]: time="2024-12-13T02:20:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:20:27.200913 /usr/lib/systemd/system-generators/torcx-generator[1855]: time="2024-12-13T02:20:27Z" level=info msg="torcx already run" Dec 13 02:20:27.317934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:20:27.317961 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:20:27.338816 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:20:27.451235 systemd[1]: Started kubelet.service. Dec 13 02:20:27.454043 systemd[1]: Stopping kubelet.service... Dec 13 02:20:27.454705 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:20:27.454933 systemd[1]: Stopped kubelet.service. Dec 13 02:20:27.456931 systemd[1]: Starting kubelet.service... Dec 13 02:20:27.602574 systemd[1]: Started kubelet.service. 
Dec 13 02:20:27.674185 kubelet[1915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:20:27.674512 kubelet[1915]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:20:27.674553 kubelet[1915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:20:27.674681 kubelet[1915]: I1213 02:20:27.674654 1915 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:20:28.180808 kubelet[1915]: I1213 02:20:28.180761 1915 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:20:28.180974 kubelet[1915]: I1213 02:20:28.180819 1915 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:20:28.181110 kubelet[1915]: I1213 02:20:28.181088 1915 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:20:28.236445 kubelet[1915]: I1213 02:20:28.236396 1915 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:20:28.256092 kubelet[1915]: I1213 02:20:28.256056 1915 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:20:28.258026 kubelet[1915]: I1213 02:20:28.257992 1915 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:20:28.258324 kubelet[1915]: I1213 02:20:28.258297 1915 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:20:28.258516 kubelet[1915]: I1213 02:20:28.258332 1915 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:20:28.258516 kubelet[1915]: I1213 02:20:28.258347 1915 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:20:28.258516 kubelet[1915]: I1213 
02:20:28.258470 1915 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:20:28.258751 kubelet[1915]: I1213 02:20:28.258630 1915 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:20:28.258751 kubelet[1915]: I1213 02:20:28.258653 1915 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:20:28.259109 kubelet[1915]: I1213 02:20:28.259090 1915 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:20:28.259197 kubelet[1915]: I1213 02:20:28.259116 1915 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:20:28.259347 kubelet[1915]: E1213 02:20:28.259322 1915 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:28.259408 kubelet[1915]: E1213 02:20:28.259388 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:20:28.260996 kubelet[1915]: I1213 02:20:28.260973 1915 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:20:28.264930 kubelet[1915]: I1213 02:20:28.264902 1915 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:20:28.266543 kubelet[1915]: W1213 02:20:28.266517 1915 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:20:28.267269 kubelet[1915]: I1213 02:20:28.267249 1915 server.go:1256] "Started kubelet" Dec 13 02:20:28.270323 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 02:20:28.270463 kubelet[1915]: I1213 02:20:28.270302 1915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:20:28.280348 kubelet[1915]: I1213 02:20:28.280301 1915 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:20:28.281781 kubelet[1915]: I1213 02:20:28.281718 1915 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:20:28.284811 kubelet[1915]: I1213 02:20:28.283368 1915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:20:28.284811 kubelet[1915]: I1213 02:20:28.283635 1915 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:20:28.284811 kubelet[1915]: W1213 02:20:28.284525 1915 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.22.41" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 02:20:28.284811 kubelet[1915]: E1213 02:20:28.284569 1915 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.41" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 02:20:28.284811 kubelet[1915]: W1213 02:20:28.284689 1915 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 02:20:28.284811 kubelet[1915]: E1213 02:20:28.284725 1915 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 02:20:28.291704 kubelet[1915]: I1213 02:20:28.291674 1915 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Dec 13 02:20:28.292501 kubelet[1915]: I1213 02:20:28.292482 1915 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:20:28.292675 kubelet[1915]: I1213 02:20:28.292664 1915 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:20:28.294666 kubelet[1915]: E1213 02:20:28.294648 1915 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:20:28.297102 kubelet[1915]: I1213 02:20:28.297071 1915 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:20:28.300106 kubelet[1915]: I1213 02:20:28.300080 1915 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:20:28.300106 kubelet[1915]: I1213 02:20:28.300100 1915 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:20:28.302354 kubelet[1915]: W1213 02:20:28.302330 1915 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 02:20:28.302522 kubelet[1915]: E1213 02:20:28.302509 1915 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 02:20:28.306992 kubelet[1915]: E1213 02:20:28.306966 1915 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{172.31.22.41.18109b225359a23c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.22.41,UID:172.31.22.41,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.22.41,},FirstTimestamp:2024-12-13 02:20:28.267217468 +0000 UTC m=+0.658872467,LastTimestamp:2024-12-13 02:20:28.267217468 +0000 UTC m=+0.658872467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.22.41,}" Dec 13 02:20:28.307488 kubelet[1915]: E1213 02:20:28.307465 1915 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.22.41\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 02:20:28.325153 kubelet[1915]: I1213 02:20:28.325117 1915 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:20:28.325153 kubelet[1915]: I1213 02:20:28.325144 1915 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:20:28.325356 kubelet[1915]: I1213 02:20:28.325174 1915 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:20:28.327537 kubelet[1915]: I1213 02:20:28.327500 1915 policy_none.go:49] "None policy: Start" Dec 13 02:20:28.328312 kubelet[1915]: I1213 02:20:28.328296 1915 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:20:28.328555 kubelet[1915]: I1213 02:20:28.328541 1915 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:20:28.341983 systemd[1]: Created slice kubepods.slice. Dec 13 02:20:28.350881 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:20:28.354422 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 02:20:28.360454 kubelet[1915]: I1213 02:20:28.360413 1915 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:20:28.360678 kubelet[1915]: I1213 02:20:28.360656 1915 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:20:28.363936 kubelet[1915]: E1213 02:20:28.363483 1915 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.41\" not found"
Dec 13 02:20:28.393814 kubelet[1915]: I1213 02:20:28.393779 1915 kubelet_node_status.go:73] "Attempting to register node" node="172.31.22.41"
Dec 13 02:20:28.402522 kubelet[1915]: I1213 02:20:28.402493 1915 kubelet_node_status.go:76] "Successfully registered node" node="172.31.22.41"
Dec 13 02:20:28.426349 kubelet[1915]: I1213 02:20:28.426314 1915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:20:28.427898 kubelet[1915]: I1213 02:20:28.427872 1915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:20:28.428017 kubelet[1915]: I1213 02:20:28.427909 1915 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:20:28.428017 kubelet[1915]: I1213 02:20:28.427929 1915 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 02:20:28.428017 kubelet[1915]: E1213 02:20:28.427980 1915 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 02:20:28.435703 kubelet[1915]: E1213 02:20:28.433290 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:28.533643 kubelet[1915]: E1213 02:20:28.533601 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:28.634286 kubelet[1915]: E1213 02:20:28.634235 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:28.735054 kubelet[1915]: E1213 02:20:28.734881 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:28.835821 kubelet[1915]: E1213 02:20:28.835763 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:28.936549 kubelet[1915]: E1213 02:20:28.936498 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:29.037363 kubelet[1915]: E1213 02:20:29.037246 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:29.126850 sudo[1792]: pam_unix(sudo:session): session closed for user root
Dec 13 02:20:29.138040 kubelet[1915]: E1213 02:20:29.137995 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:29.152084 sshd[1789]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:29.157918 systemd[1]: sshd@4-172.31.22.41:22-139.178.68.195:51068.service: Deactivated successfully.
Dec 13 02:20:29.161933 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:20:29.163387 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:20:29.165872 systemd-logind[1550]: Removed session 5.
Dec 13 02:20:29.195748 kubelet[1915]: I1213 02:20:29.195701 1915 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 02:20:29.196081 kubelet[1915]: W1213 02:20:29.195957 1915 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 02:20:29.239223 kubelet[1915]: E1213 02:20:29.239124 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:29.260461 kubelet[1915]: E1213 02:20:29.260412 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:29.339714 kubelet[1915]: E1213 02:20:29.339597 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:29.440319 kubelet[1915]: E1213 02:20:29.440236 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:29.541021 kubelet[1915]: E1213 02:20:29.540972 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:29.641749 kubelet[1915]: E1213 02:20:29.641635 1915 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.22.41\" not found"
Dec 13 02:20:29.742764 kubelet[1915]: I1213 02:20:29.742728 1915 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 02:20:29.743524 env[1567]: time="2024-12-13T02:20:29.743478922Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 02:20:29.743928 kubelet[1915]: I1213 02:20:29.743716 1915 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 02:20:30.261440 kubelet[1915]: E1213 02:20:30.261396 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:30.261440 kubelet[1915]: I1213 02:20:30.261399 1915 apiserver.go:52] "Watching apiserver"
Dec 13 02:20:30.269048 kubelet[1915]: I1213 02:20:30.269011 1915 topology_manager.go:215] "Topology Admit Handler" podUID="e06e8fbc-4902-40d9-a183-dd08630570d2" podNamespace="kube-system" podName="cilium-7bcdr"
Dec 13 02:20:30.269231 kubelet[1915]: I1213 02:20:30.269162 1915 topology_manager.go:215] "Topology Admit Handler" podUID="7d3d824e-0359-40f4-89e1-b38936043682" podNamespace="kube-system" podName="kube-proxy-k7w78"
Dec 13 02:20:30.276490 systemd[1]: Created slice kubepods-besteffort-pod7d3d824e_0359_40f4_89e1_b38936043682.slice.
Dec 13 02:20:30.286823 systemd[1]: Created slice kubepods-burstable-pode06e8fbc_4902_40d9_a183_dd08630570d2.slice.
Dec 13 02:20:30.293422 kubelet[1915]: I1213 02:20:30.293390 1915 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 02:20:30.305783 kubelet[1915]: I1213 02:20:30.305748 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-hostproc\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306008 kubelet[1915]: I1213 02:20:30.305816 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-host-proc-sys-net\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306008 kubelet[1915]: I1213 02:20:30.305891 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-host-proc-sys-kernel\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306008 kubelet[1915]: I1213 02:20:30.305917 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e06e8fbc-4902-40d9-a183-dd08630570d2-hubble-tls\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306008 kubelet[1915]: I1213 02:20:30.305948 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-run\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306008 kubelet[1915]: I1213 02:20:30.305974 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-lib-modules\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306223 kubelet[1915]: I1213 02:20:30.306045 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d3d824e-0359-40f4-89e1-b38936043682-xtables-lock\") pod \"kube-proxy-k7w78\" (UID: \"7d3d824e-0359-40f4-89e1-b38936043682\") " pod="kube-system/kube-proxy-k7w78"
Dec 13 02:20:30.306223 kubelet[1915]: I1213 02:20:30.306106 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvvdn\" (UniqueName: \"kubernetes.io/projected/e06e8fbc-4902-40d9-a183-dd08630570d2-kube-api-access-fvvdn\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306223 kubelet[1915]: I1213 02:20:30.306140 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d3d824e-0359-40f4-89e1-b38936043682-kube-proxy\") pod \"kube-proxy-k7w78\" (UID: \"7d3d824e-0359-40f4-89e1-b38936043682\") " pod="kube-system/kube-proxy-k7w78"
Dec 13 02:20:30.306223 kubelet[1915]: I1213 02:20:30.306170 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cni-path\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306223 kubelet[1915]: I1213 02:20:30.306201 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-etc-cni-netd\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306427 kubelet[1915]: I1213 02:20:30.306231 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-xtables-lock\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306427 kubelet[1915]: I1213 02:20:30.306271 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e06e8fbc-4902-40d9-a183-dd08630570d2-clustermesh-secrets\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306427 kubelet[1915]: I1213 02:20:30.306302 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-config-path\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306427 kubelet[1915]: I1213 02:20:30.306333 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d3d824e-0359-40f4-89e1-b38936043682-lib-modules\") pod \"kube-proxy-k7w78\" (UID: \"7d3d824e-0359-40f4-89e1-b38936043682\") " pod="kube-system/kube-proxy-k7w78"
Dec 13 02:20:30.306427 kubelet[1915]: I1213 02:20:30.306380 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-bpf-maps\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306427 kubelet[1915]: I1213 02:20:30.306410 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-cgroup\") pod \"cilium-7bcdr\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") " pod="kube-system/cilium-7bcdr"
Dec 13 02:20:30.306674 kubelet[1915]: I1213 02:20:30.306443 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5vsb\" (UniqueName: \"kubernetes.io/projected/7d3d824e-0359-40f4-89e1-b38936043682-kube-api-access-r5vsb\") pod \"kube-proxy-k7w78\" (UID: \"7d3d824e-0359-40f4-89e1-b38936043682\") " pod="kube-system/kube-proxy-k7w78"
Dec 13 02:20:30.586389 env[1567]: time="2024-12-13T02:20:30.586256822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7w78,Uid:7d3d824e-0359-40f4-89e1-b38936043682,Namespace:kube-system,Attempt:0,}"
Dec 13 02:20:30.594781 env[1567]: time="2024-12-13T02:20:30.594732704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7bcdr,Uid:e06e8fbc-4902-40d9-a183-dd08630570d2,Namespace:kube-system,Attempt:0,}"
Dec 13 02:20:31.133073 env[1567]: time="2024-12-13T02:20:31.133022792Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:31.138266 env[1567]: time="2024-12-13T02:20:31.138214497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:31.141168 env[1567]: time="2024-12-13T02:20:31.141117045Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:31.143396 env[1567]: time="2024-12-13T02:20:31.143340983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:31.144411 env[1567]: time="2024-12-13T02:20:31.144374479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:31.146700 env[1567]: time="2024-12-13T02:20:31.146663170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:31.147766 env[1567]: time="2024-12-13T02:20:31.147733327Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:31.149883 env[1567]: time="2024-12-13T02:20:31.149690359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:31.182252 env[1567]: time="2024-12-13T02:20:31.182180340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:20:31.182478 env[1567]: time="2024-12-13T02:20:31.182449411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:20:31.182638 env[1567]: time="2024-12-13T02:20:31.182561576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:20:31.182936 env[1567]: time="2024-12-13T02:20:31.182902197Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ce6df2ff9665f00b1dc126ff5dc8db61cd78bb18a93e576bae1178097247eae pid=1972 runtime=io.containerd.runc.v2
Dec 13 02:20:31.184941 env[1567]: time="2024-12-13T02:20:31.184878923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:20:31.185307 env[1567]: time="2024-12-13T02:20:31.184927983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:20:31.185307 env[1567]: time="2024-12-13T02:20:31.184943437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:20:31.185440 env[1567]: time="2024-12-13T02:20:31.185326111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2 pid=1978 runtime=io.containerd.runc.v2
Dec 13 02:20:31.209924 systemd[1]: Started cri-containerd-8ce6df2ff9665f00b1dc126ff5dc8db61cd78bb18a93e576bae1178097247eae.scope.
Dec 13 02:20:31.231913 systemd[1]: Started cri-containerd-9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2.scope.
Dec 13 02:20:31.261909 kubelet[1915]: E1213 02:20:31.261859 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:31.274555 env[1567]: time="2024-12-13T02:20:31.274508603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7w78,Uid:7d3d824e-0359-40f4-89e1-b38936043682,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ce6df2ff9665f00b1dc126ff5dc8db61cd78bb18a93e576bae1178097247eae\""
Dec 13 02:20:31.279352 env[1567]: time="2024-12-13T02:20:31.279307297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 02:20:31.293009 env[1567]: time="2024-12-13T02:20:31.292956276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7bcdr,Uid:e06e8fbc-4902-40d9-a183-dd08630570d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\""
Dec 13 02:20:31.423333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491037215.mount: Deactivated successfully.
Dec 13 02:20:32.262660 kubelet[1915]: E1213 02:20:32.262550 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:32.576507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090575209.mount: Deactivated successfully.
Dec 13 02:20:33.263750 kubelet[1915]: E1213 02:20:33.263692 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:33.274908 env[1567]: time="2024-12-13T02:20:33.274851780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:33.277156 env[1567]: time="2024-12-13T02:20:33.277114064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:33.278769 env[1567]: time="2024-12-13T02:20:33.278728284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:33.280335 env[1567]: time="2024-12-13T02:20:33.280254075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:33.281454 env[1567]: time="2024-12-13T02:20:33.281288798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 02:20:33.283841 env[1567]: time="2024-12-13T02:20:33.283806216Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 02:20:33.285662 env[1567]: time="2024-12-13T02:20:33.285491506Z" level=info msg="CreateContainer within sandbox \"8ce6df2ff9665f00b1dc126ff5dc8db61cd78bb18a93e576bae1178097247eae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 02:20:33.307955 env[1567]: time="2024-12-13T02:20:33.307420631Z" level=info msg="CreateContainer within sandbox \"8ce6df2ff9665f00b1dc126ff5dc8db61cd78bb18a93e576bae1178097247eae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b46a49b2c2913bc56dfa893a5cda3c13ff12c894f464dd1cef83b97945f119d8\""
Dec 13 02:20:33.309407 env[1567]: time="2024-12-13T02:20:33.309369377Z" level=info msg="StartContainer for \"b46a49b2c2913bc56dfa893a5cda3c13ff12c894f464dd1cef83b97945f119d8\""
Dec 13 02:20:33.341912 systemd[1]: Started cri-containerd-b46a49b2c2913bc56dfa893a5cda3c13ff12c894f464dd1cef83b97945f119d8.scope.
Dec 13 02:20:33.384812 env[1567]: time="2024-12-13T02:20:33.383433544Z" level=info msg="StartContainer for \"b46a49b2c2913bc56dfa893a5cda3c13ff12c894f464dd1cef83b97945f119d8\" returns successfully"
Dec 13 02:20:33.576991 systemd[1]: run-containerd-runc-k8s.io-b46a49b2c2913bc56dfa893a5cda3c13ff12c894f464dd1cef83b97945f119d8-runc.Cb29Lm.mount: Deactivated successfully.
Dec 13 02:20:34.264567 kubelet[1915]: E1213 02:20:34.264526 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:35.266164 kubelet[1915]: E1213 02:20:35.265697 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:36.267057 kubelet[1915]: E1213 02:20:36.266941 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:37.267896 kubelet[1915]: E1213 02:20:37.267772 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:38.268166 kubelet[1915]: E1213 02:20:38.268094 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:39.268848 kubelet[1915]: E1213 02:20:39.268744 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:39.830155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578224779.mount: Deactivated successfully.
Dec 13 02:20:40.269689 kubelet[1915]: E1213 02:20:40.269273 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:41.269865 kubelet[1915]: E1213 02:20:41.269781 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:42.270157 kubelet[1915]: E1213 02:20:42.270118 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:42.966223 env[1567]: time="2024-12-13T02:20:42.966169800Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:42.970482 env[1567]: time="2024-12-13T02:20:42.970432761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:42.973241 env[1567]: time="2024-12-13T02:20:42.973176272Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:20:42.973984 env[1567]: time="2024-12-13T02:20:42.973945198Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 02:20:42.976687 env[1567]: time="2024-12-13T02:20:42.976651838Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:20:42.995730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount903572481.mount: Deactivated successfully.
Dec 13 02:20:43.004886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2085687027.mount: Deactivated successfully.
Dec 13 02:20:43.016411 env[1567]: time="2024-12-13T02:20:43.016351696Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\""
Dec 13 02:20:43.017176 env[1567]: time="2024-12-13T02:20:43.017140935Z" level=info msg="StartContainer for \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\""
Dec 13 02:20:43.042897 systemd[1]: Started cri-containerd-576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f.scope.
Dec 13 02:20:43.085174 env[1567]: time="2024-12-13T02:20:43.085114532Z" level=info msg="StartContainer for \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\" returns successfully"
Dec 13 02:20:43.094719 systemd[1]: cri-containerd-576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f.scope: Deactivated successfully.
Dec 13 02:20:43.242303 env[1567]: time="2024-12-13T02:20:43.241687534Z" level=info msg="shim disconnected" id=576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f
Dec 13 02:20:43.242744 env[1567]: time="2024-12-13T02:20:43.242716044Z" level=warning msg="cleaning up after shim disconnected" id=576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f namespace=k8s.io
Dec 13 02:20:43.242870 env[1567]: time="2024-12-13T02:20:43.242852032Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:43.252380 env[1567]: time="2024-12-13T02:20:43.252319563Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2252 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:43.270453 kubelet[1915]: E1213 02:20:43.270400 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:43.484302 env[1567]: time="2024-12-13T02:20:43.484247898Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:20:43.502633 kubelet[1915]: I1213 02:20:43.502355 1915 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k7w78" podStartSLOduration=13.496694901 podStartE2EDuration="15.502298525s" podCreationTimestamp="2024-12-13 02:20:28 +0000 UTC" firstStartedPulling="2024-12-13 02:20:31.276860479 +0000 UTC m=+3.668515481" lastFinishedPulling="2024-12-13 02:20:33.282464109 +0000 UTC m=+5.674119105" observedRunningTime="2024-12-13 02:20:33.467520211 +0000 UTC m=+5.859175215" watchObservedRunningTime="2024-12-13 02:20:43.502298525 +0000 UTC m=+15.893953548"
Dec 13 02:20:43.515889 env[1567]: time="2024-12-13T02:20:43.515828243Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\""
Dec 13 02:20:43.516595 env[1567]: time="2024-12-13T02:20:43.516556640Z" level=info msg="StartContainer for \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\""
Dec 13 02:20:43.537148 systemd[1]: Started cri-containerd-bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7.scope.
Dec 13 02:20:43.578659 env[1567]: time="2024-12-13T02:20:43.578603248Z" level=info msg="StartContainer for \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\" returns successfully"
Dec 13 02:20:43.594655 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:20:43.595369 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:20:43.595704 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 02:20:43.605111 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:20:43.613583 systemd[1]: cri-containerd-bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7.scope: Deactivated successfully.
Dec 13 02:20:43.638102 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:20:43.662523 env[1567]: time="2024-12-13T02:20:43.662464458Z" level=info msg="shim disconnected" id=bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7
Dec 13 02:20:43.662523 env[1567]: time="2024-12-13T02:20:43.662522155Z" level=warning msg="cleaning up after shim disconnected" id=bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7 namespace=k8s.io
Dec 13 02:20:43.662898 env[1567]: time="2024-12-13T02:20:43.662587693Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:43.674512 env[1567]: time="2024-12-13T02:20:43.674468453Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2319 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:43.991680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f-rootfs.mount: Deactivated successfully.
Dec 13 02:20:44.271551 kubelet[1915]: E1213 02:20:44.271424 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:44.487026 env[1567]: time="2024-12-13T02:20:44.486975119Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:20:44.515520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649463395.mount: Deactivated successfully.
Dec 13 02:20:44.529888 env[1567]: time="2024-12-13T02:20:44.529735739Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\""
Dec 13 02:20:44.530915 env[1567]: time="2024-12-13T02:20:44.530872841Z" level=info msg="StartContainer for \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\""
Dec 13 02:20:44.554773 systemd[1]: Started cri-containerd-dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a.scope.
Dec 13 02:20:44.602012 systemd[1]: cri-containerd-dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a.scope: Deactivated successfully.
Dec 13 02:20:44.603049 env[1567]: time="2024-12-13T02:20:44.603000600Z" level=info msg="StartContainer for \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\" returns successfully"
Dec 13 02:20:44.642533 env[1567]: time="2024-12-13T02:20:44.642468680Z" level=info msg="shim disconnected" id=dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a
Dec 13 02:20:44.642533 env[1567]: time="2024-12-13T02:20:44.642531184Z" level=warning msg="cleaning up after shim disconnected" id=dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a namespace=k8s.io
Dec 13 02:20:44.642901 env[1567]: time="2024-12-13T02:20:44.642546835Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:44.652003 env[1567]: time="2024-12-13T02:20:44.651955964Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2380 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:44.985018 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 02:20:44.992844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a-rootfs.mount: Deactivated successfully.
Dec 13 02:20:45.272660 kubelet[1915]: E1213 02:20:45.272516 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:45.492326 env[1567]: time="2024-12-13T02:20:45.492279968Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:20:45.518938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440814320.mount: Deactivated successfully.
Dec 13 02:20:45.529437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605955113.mount: Deactivated successfully.
Dec 13 02:20:45.540043 env[1567]: time="2024-12-13T02:20:45.539991922Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\""
Dec 13 02:20:45.541224 env[1567]: time="2024-12-13T02:20:45.541184274Z" level=info msg="StartContainer for \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\""
Dec 13 02:20:45.574454 systemd[1]: Started cri-containerd-00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15.scope.
Dec 13 02:20:45.665145 env[1567]: time="2024-12-13T02:20:45.665086558Z" level=info msg="StartContainer for \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\" returns successfully"
Dec 13 02:20:45.666676 systemd[1]: cri-containerd-00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15.scope: Deactivated successfully.
Dec 13 02:20:45.697766 env[1567]: time="2024-12-13T02:20:45.697709893Z" level=info msg="shim disconnected" id=00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15
Dec 13 02:20:45.697766 env[1567]: time="2024-12-13T02:20:45.697762053Z" level=warning msg="cleaning up after shim disconnected" id=00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15 namespace=k8s.io
Dec 13 02:20:45.698174 env[1567]: time="2024-12-13T02:20:45.697774675Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:45.708655 env[1567]: time="2024-12-13T02:20:45.708605122Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2437 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:46.273267 kubelet[1915]: E1213 02:20:46.273208 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:46.499427 env[1567]: time="2024-12-13T02:20:46.499381156Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:20:46.520955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675270164.mount: Deactivated successfully.
Dec 13 02:20:46.536387 env[1567]: time="2024-12-13T02:20:46.535903088Z" level=info msg="CreateContainer within sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\""
Dec 13 02:20:46.536747 env[1567]: time="2024-12-13T02:20:46.536716517Z" level=info msg="StartContainer for \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\""
Dec 13 02:20:46.558517 systemd[1]: Started cri-containerd-233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410.scope.
Dec 13 02:20:46.600816 env[1567]: time="2024-12-13T02:20:46.599378259Z" level=info msg="StartContainer for \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\" returns successfully"
Dec 13 02:20:46.733915 kubelet[1915]: I1213 02:20:46.733886 1915 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 02:20:47.091449 kernel: Initializing XFRM netlink socket
Dec 13 02:20:47.273928 kubelet[1915]: E1213 02:20:47.273872 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:48.259254 kubelet[1915]: E1213 02:20:48.259184 1915 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:48.274548 kubelet[1915]: E1213 02:20:48.274501 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:48.821415 (udev-worker)[2543]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:20:48.821451 (udev-worker)[2544]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:20:48.823658 systemd-networkd[1289]: cilium_host: Link UP
Dec 13 02:20:48.825157 systemd-networkd[1289]: cilium_net: Link UP
Dec 13 02:20:48.828450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 02:20:48.828607 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 02:20:48.829993 systemd-networkd[1289]: cilium_net: Gained carrier
Dec 13 02:20:48.830562 systemd-networkd[1289]: cilium_host: Gained carrier
Dec 13 02:20:49.067019 (udev-worker)[2590]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:20:49.081189 systemd-networkd[1289]: cilium_vxlan: Link UP
Dec 13 02:20:49.081204 systemd-networkd[1289]: cilium_vxlan: Gained carrier
Dec 13 02:20:49.274991 kubelet[1915]: E1213 02:20:49.274944 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:49.419832 kernel: NET: Registered PF_ALG protocol family
Dec 13 02:20:49.564891 systemd-networkd[1289]: cilium_net: Gained IPv6LL
Dec 13 02:20:49.691967 systemd-networkd[1289]: cilium_host: Gained IPv6LL
Dec 13 02:20:50.275333 kubelet[1915]: E1213 02:20:50.275258 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:50.332053 systemd-networkd[1289]: cilium_vxlan: Gained IPv6LL
Dec 13 02:20:50.472059 systemd-networkd[1289]: lxc_health: Link UP
Dec 13 02:20:50.481137 systemd-networkd[1289]: lxc_health: Gained carrier
Dec 13 02:20:50.481807 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:20:50.617648 kubelet[1915]: I1213 02:20:50.617556 1915 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7bcdr" podStartSLOduration=10.938174906 podStartE2EDuration="22.617498732s" podCreationTimestamp="2024-12-13 02:20:28 +0000 UTC" firstStartedPulling="2024-12-13 02:20:31.295025103 +0000 UTC m=+3.686680091" lastFinishedPulling="2024-12-13 02:20:42.974348931 +0000 UTC m=+15.366003917" observedRunningTime="2024-12-13 02:20:47.522692703 +0000 UTC m=+19.914347709" watchObservedRunningTime="2024-12-13 02:20:50.617498732 +0000 UTC m=+23.009153734"
Dec 13 02:20:51.271309 amazon-ssm-agent[1555]: 2024-12-13 02:20:51 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Dec 13 02:20:51.276009 kubelet[1915]: E1213 02:20:51.275969 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:51.740033 systemd-networkd[1289]: lxc_health: Gained IPv6LL
Dec 13 02:20:52.277145 kubelet[1915]: E1213 02:20:52.277055 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:52.501364 kubelet[1915]: I1213 02:20:52.501291 1915 topology_manager.go:215] "Topology Admit Handler" podUID="16c991f8-3b85-46fc-9452-24c6e5997bf6" podNamespace="default" podName="nginx-deployment-6d5f899847-j8db7"
Dec 13 02:20:52.510945 systemd[1]: Created slice kubepods-besteffort-pod16c991f8_3b85_46fc_9452_24c6e5997bf6.slice.
Dec 13 02:20:52.625209 kubelet[1915]: I1213 02:20:52.624725 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56xqz\" (UniqueName: \"kubernetes.io/projected/16c991f8-3b85-46fc-9452-24c6e5997bf6-kube-api-access-56xqz\") pod \"nginx-deployment-6d5f899847-j8db7\" (UID: \"16c991f8-3b85-46fc-9452-24c6e5997bf6\") " pod="default/nginx-deployment-6d5f899847-j8db7"
Dec 13 02:20:52.816860 env[1567]: time="2024-12-13T02:20:52.816384391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-j8db7,Uid:16c991f8-3b85-46fc-9452-24c6e5997bf6,Namespace:default,Attempt:0,}"
Dec 13 02:20:52.927938 systemd-networkd[1289]: lxc01ce2dadfc7c: Link UP
Dec 13 02:20:52.936907 kernel: eth0: renamed from tmp1684c
Dec 13 02:20:52.942768 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:20:52.943967 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc01ce2dadfc7c: link becomes ready
Dec 13 02:20:52.942985 systemd-networkd[1289]: lxc01ce2dadfc7c: Gained carrier
Dec 13 02:20:53.277430 kubelet[1915]: E1213 02:20:53.277239 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:54.044095 systemd-networkd[1289]: lxc01ce2dadfc7c: Gained IPv6LL
Dec 13 02:20:54.278180 kubelet[1915]: E1213 02:20:54.278128 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:55.278769 kubelet[1915]: E1213 02:20:55.278724 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:56.280250 kubelet[1915]: E1213 02:20:56.280207 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:56.379548 env[1567]: time="2024-12-13T02:20:56.379463199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:20:56.379548 env[1567]: time="2024-12-13T02:20:56.379523944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:20:56.380200 env[1567]: time="2024-12-13T02:20:56.380153906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:20:56.380770 env[1567]: time="2024-12-13T02:20:56.380697842Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1684c6b8a636b82b825a520fcdc3fc95b35018917d8cf68e23cb3cff557fbbcf pid=2956 runtime=io.containerd.runc.v2
Dec 13 02:20:56.414472 systemd[1]: Started cri-containerd-1684c6b8a636b82b825a520fcdc3fc95b35018917d8cf68e23cb3cff557fbbcf.scope.
Dec 13 02:20:56.472285 env[1567]: time="2024-12-13T02:20:56.472212005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-j8db7,Uid:16c991f8-3b85-46fc-9452-24c6e5997bf6,Namespace:default,Attempt:0,} returns sandbox id \"1684c6b8a636b82b825a520fcdc3fc95b35018917d8cf68e23cb3cff557fbbcf\""
Dec 13 02:20:56.474637 env[1567]: time="2024-12-13T02:20:56.474392097Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 02:20:57.282136 kubelet[1915]: E1213 02:20:57.282084 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:58.282533 kubelet[1915]: E1213 02:20:58.282422 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:59.283403 kubelet[1915]: E1213 02:20:59.283344 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:20:59.730880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2466020214.mount: Deactivated successfully.
Dec 13 02:21:00.101181 update_engine[1551]: I1213 02:21:00.098876 1551 update_attempter.cc:509] Updating boot flags...
Dec 13 02:21:00.283761 kubelet[1915]: E1213 02:21:00.283700 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:01.287369 kubelet[1915]: E1213 02:21:01.287291 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:01.833619 env[1567]: time="2024-12-13T02:21:01.833562193Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:01.837995 env[1567]: time="2024-12-13T02:21:01.837945516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:01.841265 env[1567]: time="2024-12-13T02:21:01.841219880Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:01.852886 env[1567]: time="2024-12-13T02:21:01.852834794Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:01.853931 env[1567]: time="2024-12-13T02:21:01.853887732Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 02:21:01.856445 env[1567]: time="2024-12-13T02:21:01.856403460Z" level=info msg="CreateContainer within sandbox \"1684c6b8a636b82b825a520fcdc3fc95b35018917d8cf68e23cb3cff557fbbcf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 02:21:01.876638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319805097.mount: Deactivated successfully.
Dec 13 02:21:01.885652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120528106.mount: Deactivated successfully.
Dec 13 02:21:01.892483 env[1567]: time="2024-12-13T02:21:01.892429791Z" level=info msg="CreateContainer within sandbox \"1684c6b8a636b82b825a520fcdc3fc95b35018917d8cf68e23cb3cff557fbbcf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7bac3bb85061f0651cc7eb02cc5f44364a7fc8135224a4b2dcc021262e1c4959\""
Dec 13 02:21:01.893223 env[1567]: time="2024-12-13T02:21:01.893181477Z" level=info msg="StartContainer for \"7bac3bb85061f0651cc7eb02cc5f44364a7fc8135224a4b2dcc021262e1c4959\""
Dec 13 02:21:01.922355 systemd[1]: Started cri-containerd-7bac3bb85061f0651cc7eb02cc5f44364a7fc8135224a4b2dcc021262e1c4959.scope.
Dec 13 02:21:01.965344 env[1567]: time="2024-12-13T02:21:01.965296309Z" level=info msg="StartContainer for \"7bac3bb85061f0651cc7eb02cc5f44364a7fc8135224a4b2dcc021262e1c4959\" returns successfully"
Dec 13 02:21:02.287657 kubelet[1915]: E1213 02:21:02.287601 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:02.576198 kubelet[1915]: I1213 02:21:02.576040 1915 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-j8db7" podStartSLOduration=5.19535748 podStartE2EDuration="10.575994043s" podCreationTimestamp="2024-12-13 02:20:52 +0000 UTC" firstStartedPulling="2024-12-13 02:20:56.473601228 +0000 UTC m=+28.865256215" lastFinishedPulling="2024-12-13 02:21:01.854237781 +0000 UTC m=+34.245892778" observedRunningTime="2024-12-13 02:21:02.57503401 +0000 UTC m=+34.966689016" watchObservedRunningTime="2024-12-13 02:21:02.575994043 +0000 UTC m=+34.967649050"
Dec 13 02:21:03.288233 kubelet[1915]: E1213 02:21:03.288181 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:04.288665 kubelet[1915]: E1213 02:21:04.288611 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:05.289003 kubelet[1915]: E1213 02:21:05.288950 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:06.289414 kubelet[1915]: E1213 02:21:06.289361 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:07.290389 kubelet[1915]: E1213 02:21:07.290335 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:07.782458 kubelet[1915]: I1213 02:21:07.782413 1915 topology_manager.go:215] "Topology Admit Handler" podUID="2118f52b-061d-491a-b41b-9adc86332567" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 02:21:07.787821 systemd[1]: Created slice kubepods-besteffort-pod2118f52b_061d_491a_b41b_9adc86332567.slice.
Dec 13 02:21:07.845435 kubelet[1915]: I1213 02:21:07.845382 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2118f52b-061d-491a-b41b-9adc86332567-data\") pod \"nfs-server-provisioner-0\" (UID: \"2118f52b-061d-491a-b41b-9adc86332567\") " pod="default/nfs-server-provisioner-0"
Dec 13 02:21:07.845435 kubelet[1915]: I1213 02:21:07.845438 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh27b\" (UniqueName: \"kubernetes.io/projected/2118f52b-061d-491a-b41b-9adc86332567-kube-api-access-kh27b\") pod \"nfs-server-provisioner-0\" (UID: \"2118f52b-061d-491a-b41b-9adc86332567\") " pod="default/nfs-server-provisioner-0"
Dec 13 02:21:08.092441 env[1567]: time="2024-12-13T02:21:08.092077547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2118f52b-061d-491a-b41b-9adc86332567,Namespace:default,Attempt:0,}"
Dec 13 02:21:08.177001 (udev-worker)[3150]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:21:08.178961 systemd-networkd[1289]: lxc13d2d18e2907: Link UP
Dec 13 02:21:08.183974 kernel: eth0: renamed from tmp67254
Dec 13 02:21:08.188138 (udev-worker)[3165]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:21:08.193448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:21:08.193632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc13d2d18e2907: link becomes ready
Dec 13 02:21:08.191674 systemd-networkd[1289]: lxc13d2d18e2907: Gained carrier
Dec 13 02:21:08.259455 kubelet[1915]: E1213 02:21:08.259413 1915 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:08.291639 kubelet[1915]: E1213 02:21:08.291481 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:08.459175 env[1567]: time="2024-12-13T02:21:08.459080656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:21:08.459175 env[1567]: time="2024-12-13T02:21:08.459129026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:21:08.459465 env[1567]: time="2024-12-13T02:21:08.459145557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:21:08.459465 env[1567]: time="2024-12-13T02:21:08.459333553Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67254bf928d60872fdf4f8557d40b2214ddde2eeb6ec7b38e4bdf0112c265419 pid=3181 runtime=io.containerd.runc.v2
Dec 13 02:21:08.486979 systemd[1]: run-containerd-runc-k8s.io-67254bf928d60872fdf4f8557d40b2214ddde2eeb6ec7b38e4bdf0112c265419-runc.NFrJlA.mount: Deactivated successfully.
Dec 13 02:21:08.509127 systemd[1]: Started cri-containerd-67254bf928d60872fdf4f8557d40b2214ddde2eeb6ec7b38e4bdf0112c265419.scope.
Dec 13 02:21:08.564127 env[1567]: time="2024-12-13T02:21:08.564081991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2118f52b-061d-491a-b41b-9adc86332567,Namespace:default,Attempt:0,} returns sandbox id \"67254bf928d60872fdf4f8557d40b2214ddde2eeb6ec7b38e4bdf0112c265419\""
Dec 13 02:21:08.566356 env[1567]: time="2024-12-13T02:21:08.566309932Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 02:21:09.292142 kubelet[1915]: E1213 02:21:09.292022 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:09.477240 systemd-networkd[1289]: lxc13d2d18e2907: Gained IPv6LL
Dec 13 02:21:10.292203 kubelet[1915]: E1213 02:21:10.292162 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:11.293133 kubelet[1915]: E1213 02:21:11.293090 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:11.780744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023697601.mount: Deactivated successfully.
Dec 13 02:21:12.294215 kubelet[1915]: E1213 02:21:12.294139 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:13.294922 kubelet[1915]: E1213 02:21:13.294861 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:14.295704 kubelet[1915]: E1213 02:21:14.295567 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:14.358486 env[1567]: time="2024-12-13T02:21:14.358430857Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:14.363490 env[1567]: time="2024-12-13T02:21:14.363362870Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:14.367518 env[1567]: time="2024-12-13T02:21:14.367468161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:14.371574 env[1567]: time="2024-12-13T02:21:14.371524075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:14.372830 env[1567]: time="2024-12-13T02:21:14.372765476Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 02:21:14.376739 env[1567]: time="2024-12-13T02:21:14.376657592Z" level=info msg="CreateContainer within sandbox \"67254bf928d60872fdf4f8557d40b2214ddde2eeb6ec7b38e4bdf0112c265419\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 02:21:14.396713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362355025.mount: Deactivated successfully.
Dec 13 02:21:14.406043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964159886.mount: Deactivated successfully.
Dec 13 02:21:14.419009 env[1567]: time="2024-12-13T02:21:14.418929486Z" level=info msg="CreateContainer within sandbox \"67254bf928d60872fdf4f8557d40b2214ddde2eeb6ec7b38e4bdf0112c265419\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"00e9f08c396a5655495c9812a9bd7ee21fccdfda1dcf0ac14eb857684bd60f06\""
Dec 13 02:21:14.421157 env[1567]: time="2024-12-13T02:21:14.421109777Z" level=info msg="StartContainer for \"00e9f08c396a5655495c9812a9bd7ee21fccdfda1dcf0ac14eb857684bd60f06\""
Dec 13 02:21:14.464623 systemd[1]: Started cri-containerd-00e9f08c396a5655495c9812a9bd7ee21fccdfda1dcf0ac14eb857684bd60f06.scope.
Dec 13 02:21:14.507226 env[1567]: time="2024-12-13T02:21:14.507118139Z" level=info msg="StartContainer for \"00e9f08c396a5655495c9812a9bd7ee21fccdfda1dcf0ac14eb857684bd60f06\" returns successfully"
Dec 13 02:21:14.589147 kubelet[1915]: I1213 02:21:14.589031 1915 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.7817873199999998 podStartE2EDuration="7.588977786s" podCreationTimestamp="2024-12-13 02:21:07 +0000 UTC" firstStartedPulling="2024-12-13 02:21:08.565883397 +0000 UTC m=+40.957538388" lastFinishedPulling="2024-12-13 02:21:14.373073868 +0000 UTC m=+46.764728854" observedRunningTime="2024-12-13 02:21:14.587987176 +0000 UTC m=+46.979642179" watchObservedRunningTime="2024-12-13 02:21:14.588977786 +0000 UTC m=+46.980632792"
Dec 13 02:21:15.296778 kubelet[1915]: E1213 02:21:15.296725 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:16.297697 kubelet[1915]: E1213 02:21:16.297643 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:17.298408 kubelet[1915]: E1213 02:21:17.298357 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:18.298858 kubelet[1915]: E1213 02:21:18.298777 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:19.299524 kubelet[1915]: E1213 02:21:19.299485 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:20.301239 kubelet[1915]: E1213 02:21:20.301034 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:21.301417 kubelet[1915]: E1213 02:21:21.301364 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:22.301653 kubelet[1915]: E1213 02:21:22.301499 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:23.302913 kubelet[1915]: E1213 02:21:23.302850 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:23.912702 kubelet[1915]: I1213 02:21:23.912656 1915 topology_manager.go:215] "Topology Admit Handler" podUID="e1457d75-6891-49ab-8c62-656abfd53e63" podNamespace="default" podName="test-pod-1"
Dec 13 02:21:23.922248 systemd[1]: Created slice kubepods-besteffort-pode1457d75_6891_49ab_8c62_656abfd53e63.slice.
Dec 13 02:21:24.054077 kubelet[1915]: I1213 02:21:24.054037 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs7sl\" (UniqueName: \"kubernetes.io/projected/e1457d75-6891-49ab-8c62-656abfd53e63-kube-api-access-xs7sl\") pod \"test-pod-1\" (UID: \"e1457d75-6891-49ab-8c62-656abfd53e63\") " pod="default/test-pod-1"
Dec 13 02:21:24.054273 kubelet[1915]: I1213 02:21:24.054142 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28d3a687-e8e0-4238-a7bf-afbf646c8ebe\" (UniqueName: \"kubernetes.io/nfs/e1457d75-6891-49ab-8c62-656abfd53e63-pvc-28d3a687-e8e0-4238-a7bf-afbf646c8ebe\") pod \"test-pod-1\" (UID: \"e1457d75-6891-49ab-8c62-656abfd53e63\") " pod="default/test-pod-1"
Dec 13 02:21:24.250908 kernel: FS-Cache: Loaded
Dec 13 02:21:24.305194 kubelet[1915]: E1213 02:21:24.305150 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:24.317286 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 02:21:24.317409 kernel: RPC: Registered udp transport module.
Dec 13 02:21:24.317434 kernel: RPC: Registered tcp transport module.
Dec 13 02:21:24.317455 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 02:21:24.425515 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 02:21:24.752952 kernel: NFS: Registering the id_resolver key type
Dec 13 02:21:24.753138 kernel: Key type id_resolver registered
Dec 13 02:21:24.755178 kernel: Key type id_legacy registered
Dec 13 02:21:24.815810 nfsidmap[3329]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 02:21:24.834832 nfsidmap[3330]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 02:21:25.129104 env[1567]: time="2024-12-13T02:21:25.129060794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e1457d75-6891-49ab-8c62-656abfd53e63,Namespace:default,Attempt:0,}"
Dec 13 02:21:25.176579 systemd-networkd[1289]: lxcb341aabf6597: Link UP
Dec 13 02:21:25.180275 (udev-worker)[3325]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:21:25.183834 kernel: eth0: renamed from tmp175dc
Dec 13 02:21:25.191359 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:21:25.191497 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb341aabf6597: link becomes ready
Dec 13 02:21:25.191767 systemd-networkd[1289]: lxcb341aabf6597: Gained carrier
Dec 13 02:21:25.307873 kubelet[1915]: E1213 02:21:25.307766 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:25.385951 env[1567]: time="2024-12-13T02:21:25.385755331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:21:25.386129 env[1567]: time="2024-12-13T02:21:25.385830085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:21:25.386129 env[1567]: time="2024-12-13T02:21:25.385846325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:21:25.386632 env[1567]: time="2024-12-13T02:21:25.386337017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/175dc2538cf67233d9eeba1fe7e4ee6a7a0ba7b8475aa87d9783561dc345f52a pid=3355 runtime=io.containerd.runc.v2
Dec 13 02:21:25.433885 systemd[1]: run-containerd-runc-k8s.io-175dc2538cf67233d9eeba1fe7e4ee6a7a0ba7b8475aa87d9783561dc345f52a-runc.2ImimY.mount: Deactivated successfully.
Dec 13 02:21:25.452233 systemd[1]: Started cri-containerd-175dc2538cf67233d9eeba1fe7e4ee6a7a0ba7b8475aa87d9783561dc345f52a.scope.
Dec 13 02:21:25.539570 env[1567]: time="2024-12-13T02:21:25.539521057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e1457d75-6891-49ab-8c62-656abfd53e63,Namespace:default,Attempt:0,} returns sandbox id \"175dc2538cf67233d9eeba1fe7e4ee6a7a0ba7b8475aa87d9783561dc345f52a\""
Dec 13 02:21:25.541425 env[1567]: time="2024-12-13T02:21:25.541384635Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 02:21:25.851189 env[1567]: time="2024-12-13T02:21:25.850721484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:25.854916 env[1567]: time="2024-12-13T02:21:25.854867002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:25.858222 env[1567]: time="2024-12-13T02:21:25.858177153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:25.861238 env[1567]: time="2024-12-13T02:21:25.861188776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:21:25.862054 env[1567]: time="2024-12-13T02:21:25.862012871Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 02:21:25.866241 env[1567]: time="2024-12-13T02:21:25.866202577Z" level=info msg="CreateContainer within sandbox \"175dc2538cf67233d9eeba1fe7e4ee6a7a0ba7b8475aa87d9783561dc345f52a\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 02:21:25.898758 env[1567]: time="2024-12-13T02:21:25.898631721Z" level=info msg="CreateContainer within sandbox \"175dc2538cf67233d9eeba1fe7e4ee6a7a0ba7b8475aa87d9783561dc345f52a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ecc71eee937b2899540b95d73dc2a5ef9a7960158d44390ff711cfb45dcfc8c4\""
Dec 13 02:21:25.899332 env[1567]: time="2024-12-13T02:21:25.899252724Z" level=info msg="StartContainer for \"ecc71eee937b2899540b95d73dc2a5ef9a7960158d44390ff711cfb45dcfc8c4\""
Dec 13 02:21:25.937857 systemd[1]: Started cri-containerd-ecc71eee937b2899540b95d73dc2a5ef9a7960158d44390ff711cfb45dcfc8c4.scope.
Dec 13 02:21:25.990682 env[1567]: time="2024-12-13T02:21:25.988862193Z" level=info msg="StartContainer for \"ecc71eee937b2899540b95d73dc2a5ef9a7960158d44390ff711cfb45dcfc8c4\" returns successfully"
Dec 13 02:21:26.308571 kubelet[1915]: E1213 02:21:26.308516 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:26.364196 systemd-networkd[1289]: lxcb341aabf6597: Gained IPv6LL
Dec 13 02:21:27.308947 kubelet[1915]: E1213 02:21:27.308871 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:28.259498 kubelet[1915]: E1213 02:21:28.259444 1915 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:28.309717 kubelet[1915]: E1213 02:21:28.309629 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:29.309937 kubelet[1915]: E1213 02:21:29.309888 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:30.310937 kubelet[1915]: E1213 02:21:30.310890 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:31.311878 kubelet[1915]: E1213 02:21:31.311825 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:32.312719 kubelet[1915]: E1213 02:21:32.312677 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:33.312917 kubelet[1915]: E1213 02:21:33.312858 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:33.492809 kubelet[1915]: I1213 02:21:33.492764 1915 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=25.171123967 podStartE2EDuration="25.492713975s" podCreationTimestamp="2024-12-13 02:21:08 +0000 UTC" firstStartedPulling="2024-12-13 02:21:25.540817484 +0000 UTC m=+57.932472477" lastFinishedPulling="2024-12-13 02:21:25.862407485 +0000 UTC m=+58.254062485" observedRunningTime="2024-12-13 02:21:26.613918174 +0000 UTC m=+59.005573178" watchObservedRunningTime="2024-12-13 02:21:33.492713975 +0000 UTC m=+65.884368978"
Dec 13 02:21:33.519417 systemd[1]: run-containerd-runc-k8s.io-233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410-runc.RRAFLL.mount: Deactivated successfully.
Dec 13 02:21:33.553648 env[1567]: time="2024-12-13T02:21:33.553512278Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:21:33.560698 env[1567]: time="2024-12-13T02:21:33.560658064Z" level=info msg="StopContainer for \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\" with timeout 2 (s)"
Dec 13 02:21:33.561016 env[1567]: time="2024-12-13T02:21:33.560967164Z" level=info msg="Stop container \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\" with signal terminated"
Dec 13 02:21:33.572926 systemd-networkd[1289]: lxc_health: Link DOWN
Dec 13 02:21:33.573039 systemd-networkd[1289]: lxc_health: Lost carrier
Dec 13 02:21:33.736507 systemd[1]: cri-containerd-233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410.scope: Deactivated successfully.
Dec 13 02:21:33.737805 systemd[1]: cri-containerd-233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410.scope: Consumed 7.556s CPU time.
Dec 13 02:21:33.779933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410-rootfs.mount: Deactivated successfully.
Dec 13 02:21:33.904666 env[1567]: time="2024-12-13T02:21:33.904483180Z" level=info msg="shim disconnected" id=233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410
Dec 13 02:21:33.904666 env[1567]: time="2024-12-13T02:21:33.904655501Z" level=warning msg="cleaning up after shim disconnected" id=233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410 namespace=k8s.io
Dec 13 02:21:33.904666 env[1567]: time="2024-12-13T02:21:33.904674962Z" level=info msg="cleaning up dead shim"
Dec 13 02:21:33.920982 env[1567]: time="2024-12-13T02:21:33.920925445Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3492 runtime=io.containerd.runc.v2\n"
Dec 13 02:21:33.925094 env[1567]: time="2024-12-13T02:21:33.925039269Z" level=info msg="StopContainer for \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\" returns successfully"
Dec 13 02:21:33.925919 env[1567]: time="2024-12-13T02:21:33.925880804Z" level=info msg="StopPodSandbox for \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\""
Dec 13 02:21:33.926055 env[1567]: time="2024-12-13T02:21:33.925954068Z" level=info msg="Container to stop \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:33.926055 env[1567]: time="2024-12-13T02:21:33.925975118Z" level=info msg="Container to stop \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:33.926055 env[1567]: time="2024-12-13T02:21:33.925990935Z" level=info msg="Container to stop \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:33.926055 env[1567]: time="2024-12-13T02:21:33.926008007Z" level=info msg="Container to stop \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:33.926055 env[1567]: time="2024-12-13T02:21:33.926023881Z" level=info msg="Container to stop \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:33.929639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2-shm.mount: Deactivated successfully.
Dec 13 02:21:33.939099 systemd[1]: cri-containerd-9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2.scope: Deactivated successfully.
Dec 13 02:21:33.974418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2-rootfs.mount: Deactivated successfully.
Dec 13 02:21:33.984484 env[1567]: time="2024-12-13T02:21:33.984423784Z" level=info msg="shim disconnected" id=9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2
Dec 13 02:21:33.984726 env[1567]: time="2024-12-13T02:21:33.984486444Z" level=warning msg="cleaning up after shim disconnected" id=9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2 namespace=k8s.io
Dec 13 02:21:33.984726 env[1567]: time="2024-12-13T02:21:33.984499050Z" level=info msg="cleaning up dead shim"
Dec 13 02:21:33.995485 env[1567]: time="2024-12-13T02:21:33.995426561Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3524 runtime=io.containerd.runc.v2\n"
Dec 13 02:21:33.996316 env[1567]: time="2024-12-13T02:21:33.996270854Z" level=info msg="TearDown network for sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" successfully"
Dec 13 02:21:33.996316 env[1567]: time="2024-12-13T02:21:33.996311191Z" level=info msg="StopPodSandbox for \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" returns successfully"
Dec 13 02:21:34.120891 kubelet[1915]: I1213 02:21:34.120747 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e06e8fbc-4902-40d9-a183-dd08630570d2-clustermesh-secrets\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121210 kubelet[1915]: I1213 02:21:34.121192 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-run\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121337 kubelet[1915]: I1213 02:21:34.121324 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-lib-modules\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121439 kubelet[1915]: I1213 02:21:34.121429 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-bpf-maps\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121546 kubelet[1915]: I1213 02:21:34.121534 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cni-path\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121627 kubelet[1915]: I1213 02:21:34.121617 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-config-path\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121714 kubelet[1915]: I1213 02:21:34.121686 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-host-proc-sys-net\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121806 kubelet[1915]: I1213 02:21:34.121724 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-host-proc-sys-kernel\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121806 kubelet[1915]: I1213 02:21:34.121757 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e06e8fbc-4902-40d9-a183-dd08630570d2-hubble-tls\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121806 kubelet[1915]: I1213 02:21:34.121800 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-xtables-lock\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121969 kubelet[1915]: I1213 02:21:34.121830 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-hostproc\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121969 kubelet[1915]: I1213 02:21:34.121860 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-cgroup\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121969 kubelet[1915]: I1213 02:21:34.121896 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvvdn\" (UniqueName: \"kubernetes.io/projected/e06e8fbc-4902-40d9-a183-dd08630570d2-kube-api-access-fvvdn\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.121969 kubelet[1915]: I1213 02:21:34.121922 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-etc-cni-netd\") pod \"e06e8fbc-4902-40d9-a183-dd08630570d2\" (UID: \"e06e8fbc-4902-40d9-a183-dd08630570d2\") "
Dec 13 02:21:34.122136 kubelet[1915]: I1213 02:21:34.122099 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.122185 kubelet[1915]: I1213 02:21:34.122147 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.122185 kubelet[1915]: I1213 02:21:34.122172 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.122287 kubelet[1915]: I1213 02:21:34.122195 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.122287 kubelet[1915]: I1213 02:21:34.122218 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.123012 kubelet[1915]: I1213 02:21:34.122596 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.123012 kubelet[1915]: I1213 02:21:34.122656 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.123012 kubelet[1915]: I1213 02:21:34.122683 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.125269 kubelet[1915]: I1213 02:21:34.125230 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:21:34.125370 kubelet[1915]: I1213 02:21:34.125307 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.125370 kubelet[1915]: I1213 02:21:34.125338 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.130616 kubelet[1915]: I1213 02:21:34.130572 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06e8fbc-4902-40d9-a183-dd08630570d2-kube-api-access-fvvdn" (OuterVolumeSpecName: "kube-api-access-fvvdn") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "kube-api-access-fvvdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:21:34.131027 kubelet[1915]: I1213 02:21:34.130994 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06e8fbc-4902-40d9-a183-dd08630570d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:21:34.132435 kubelet[1915]: I1213 02:21:34.132409 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06e8fbc-4902-40d9-a183-dd08630570d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e06e8fbc-4902-40d9-a183-dd08630570d2" (UID: "e06e8fbc-4902-40d9-a183-dd08630570d2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:21:34.223217 kubelet[1915]: I1213 02:21:34.222945 1915 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-run\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223217 kubelet[1915]: I1213 02:21:34.222989 1915 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-lib-modules\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223217 kubelet[1915]: I1213 02:21:34.223006 1915 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e06e8fbc-4902-40d9-a183-dd08630570d2-clustermesh-secrets\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223217 kubelet[1915]: I1213 02:21:34.223020 1915 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cni-path\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223217 kubelet[1915]: I1213 02:21:34.223036 1915 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-config-path\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223217 kubelet[1915]: I1213 02:21:34.223049 1915 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-bpf-maps\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223217 kubelet[1915]: I1213 02:21:34.223062 1915 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-xtables-lock\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223217 kubelet[1915]: I1213 02:21:34.223075 1915 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-hostproc\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223698 kubelet[1915]: I1213 02:21:34.223088 1915 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-host-proc-sys-net\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223698 kubelet[1915]: I1213 02:21:34.223103 1915 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-host-proc-sys-kernel\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223698 kubelet[1915]: I1213 02:21:34.223117 1915 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e06e8fbc-4902-40d9-a183-dd08630570d2-hubble-tls\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223698 kubelet[1915]: I1213 02:21:34.223130 1915 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-cilium-cgroup\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223698 kubelet[1915]: I1213 02:21:34.223143 1915 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fvvdn\" (UniqueName: \"kubernetes.io/projected/e06e8fbc-4902-40d9-a183-dd08630570d2-kube-api-access-fvvdn\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.223698 kubelet[1915]: I1213 02:21:34.223156 1915 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e06e8fbc-4902-40d9-a183-dd08630570d2-etc-cni-netd\") on node \"172.31.22.41\" DevicePath \"\""
Dec 13 02:21:34.313738 kubelet[1915]: E1213 02:21:34.313564 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:34.440656 systemd[1]: Removed slice kubepods-burstable-pode06e8fbc_4902_40d9_a183_dd08630570d2.slice.
Dec 13 02:21:34.440813 systemd[1]: kubepods-burstable-pode06e8fbc_4902_40d9_a183_dd08630570d2.slice: Consumed 7.685s CPU time.
Dec 13 02:21:34.512358 systemd[1]: var-lib-kubelet-pods-e06e8fbc\x2d4902\x2d40d9\x2da183\x2ddd08630570d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvvdn.mount: Deactivated successfully.
Dec 13 02:21:34.512482 systemd[1]: var-lib-kubelet-pods-e06e8fbc\x2d4902\x2d40d9\x2da183\x2ddd08630570d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:21:34.512584 systemd[1]: var-lib-kubelet-pods-e06e8fbc\x2d4902\x2d40d9\x2da183\x2ddd08630570d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:21:34.648461 kubelet[1915]: I1213 02:21:34.648433 1915 scope.go:117] "RemoveContainer" containerID="233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410"
Dec 13 02:21:34.658539 env[1567]: time="2024-12-13T02:21:34.658491318Z" level=info msg="RemoveContainer for \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\""
Dec 13 02:21:34.664889 env[1567]: time="2024-12-13T02:21:34.664840645Z" level=info msg="RemoveContainer for \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\" returns successfully"
Dec 13 02:21:34.665290 kubelet[1915]: I1213 02:21:34.665258 1915 scope.go:117] "RemoveContainer" containerID="00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15"
Dec 13 02:21:34.668001 env[1567]: time="2024-12-13T02:21:34.667963693Z" level=info msg="RemoveContainer for \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\""
Dec 13 02:21:34.673248 env[1567]: time="2024-12-13T02:21:34.673200359Z" level=info msg="RemoveContainer for \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\" returns successfully"
Dec 13 02:21:34.673492 kubelet[1915]: I1213 02:21:34.673439 1915 scope.go:117] "RemoveContainer" containerID="dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a"
Dec 13 02:21:34.676371 env[1567]: time="2024-12-13T02:21:34.676288509Z" level=info msg="RemoveContainer for \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\""
Dec 13 02:21:34.683406 env[1567]: time="2024-12-13T02:21:34.683357655Z" level=info msg="RemoveContainer for \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\" returns successfully"
Dec 13 02:21:34.683639 kubelet[1915]: I1213 02:21:34.683613 1915 scope.go:117] "RemoveContainer" containerID="bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7"
Dec 13 02:21:34.685661 env[1567]: time="2024-12-13T02:21:34.685622020Z" level=info msg="RemoveContainer for \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\""
Dec 13 02:21:34.691644 env[1567]: time="2024-12-13T02:21:34.691593632Z" level=info msg="RemoveContainer for \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\" returns successfully"
Dec 13 02:21:34.692402 kubelet[1915]: I1213 02:21:34.692323 1915 scope.go:117] "RemoveContainer" containerID="576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f"
Dec 13 02:21:34.694358 env[1567]: time="2024-12-13T02:21:34.694317967Z" level=info msg="RemoveContainer for \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\""
Dec 13 02:21:34.699782 env[1567]: time="2024-12-13T02:21:34.699733625Z" level=info msg="RemoveContainer for \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\" returns successfully"
Dec 13 02:21:34.700022 kubelet[1915]: I1213 02:21:34.699995 1915 scope.go:117] "RemoveContainer" containerID="233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410"
Dec 13 02:21:34.700460 env[1567]: time="2024-12-13T02:21:34.700321879Z" level=error msg="ContainerStatus for \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\": not found"
Dec 13 02:21:34.700614 kubelet[1915]: E1213 02:21:34.700592 1915 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\": not found" containerID="233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410"
Dec 13 02:21:34.700732 kubelet[1915]: I1213 02:21:34.700711 1915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410"} err="failed to get container status \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\": rpc error: code = NotFound desc = an error occurred when try to find container \"233c485ff8c262febfb3a56ef9102c5f62fef16149bfa9718a77beeb89e50410\": not found"
Dec 13 02:21:34.700821 kubelet[1915]: I1213 02:21:34.700735 1915 scope.go:117] "RemoveContainer" containerID="00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15"
Dec 13 02:21:34.701004 env[1567]: time="2024-12-13T02:21:34.700948280Z" level=error msg="ContainerStatus for \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\": not found"
Dec 13 02:21:34.701162 kubelet[1915]: E1213 02:21:34.701135 1915 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\": not found" containerID="00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15"
Dec 13 02:21:34.701235 kubelet[1915]: I1213 02:21:34.701176 1915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15"} err="failed to get container status \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\": rpc error: code = NotFound desc = an error occurred when try to find container \"00fbf20486e7458341835b4d155e70b3aee148a6901c7a0ac6c35f9b7898ae15\": not found"
Dec 13 02:21:34.701235 kubelet[1915]: I1213 02:21:34.701189 1915 scope.go:117] "RemoveContainer" containerID="dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a"
Dec 13 02:21:34.701428 env[1567]: time="2024-12-13T02:21:34.701369129Z" level=error msg="ContainerStatus for \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\": not found"
Dec 13 02:21:34.701543 kubelet[1915]: E1213 02:21:34.701522 1915 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\": not found" containerID="dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a"
Dec 13 02:21:34.701620 kubelet[1915]: I1213 02:21:34.701559 1915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a"} err="failed to get container status \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbc6a7b670c5c155ac528c90ddd7038edbfda6d56dc980b874bfdc21f443645a\": not found"
Dec 13 02:21:34.701620 kubelet[1915]: I1213 02:21:34.701572 1915 scope.go:117] "RemoveContainer" containerID="bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7"
Dec 13 02:21:34.701947 env[1567]: time="2024-12-13T02:21:34.701898402Z" level=error msg="ContainerStatus for \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\": not found"
Dec 13 02:21:34.702064 kubelet[1915]: E1213 02:21:34.702043 1915 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\": not found" containerID="bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7"
Dec 13 02:21:34.702133 kubelet[1915]: I1213 02:21:34.702077 1915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7"} err="failed to get container status \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd5f41aadd4e24106a7738ff1e4f5974936c5f59526c3b1dd3d93144b1456cd7\": not found"
Dec 13 02:21:34.702133 kubelet[1915]: I1213 02:21:34.702090 1915 scope.go:117] "RemoveContainer" containerID="576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f"
Dec 13 02:21:34.702449 env[1567]: time="2024-12-13T02:21:34.702402235Z" level=error msg="ContainerStatus for \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\": not found"
Dec 13 02:21:34.702569 kubelet[1915]: E1213 02:21:34.702549 1915 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\": not found" containerID="576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f"
Dec 13 02:21:34.702648 kubelet[1915]: I1213 02:21:34.702581 1915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f"} err="failed to get container status \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"576a833051bd39c8083ce4e2aea59131fa4e4bb825371a3a2485bd082b7f2e4f\": not found"
Dec 13 02:21:35.314718 kubelet[1915]: E1213 02:21:35.314667 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:36.314891 kubelet[1915]: E1213 02:21:36.314842 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:36.431952 kubelet[1915]: I1213 02:21:36.431914 1915 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e06e8fbc-4902-40d9-a183-dd08630570d2" path="/var/lib/kubelet/pods/e06e8fbc-4902-40d9-a183-dd08630570d2/volumes"
Dec 13 02:21:36.463779 kubelet[1915]: I1213 02:21:36.463737 1915 topology_manager.go:215] "Topology Admit Handler" podUID="f4fb34e5-38e5-444b-bcb2-4a26358f633a" podNamespace="kube-system" podName="cilium-operator-5cc964979-r64h8"
Dec 13 02:21:36.463966 kubelet[1915]: E1213 02:21:36.463818 1915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e06e8fbc-4902-40d9-a183-dd08630570d2" containerName="mount-cgroup"
Dec 13 02:21:36.463966 kubelet[1915]: E1213 02:21:36.463833 1915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e06e8fbc-4902-40d9-a183-dd08630570d2" containerName="mount-bpf-fs"
Dec 13 02:21:36.463966 kubelet[1915]: E1213 02:21:36.463842 1915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e06e8fbc-4902-40d9-a183-dd08630570d2" containerName="cilium-agent"
Dec 13 02:21:36.463966 kubelet[1915]: E1213 02:21:36.463851 1915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e06e8fbc-4902-40d9-a183-dd08630570d2" containerName="apply-sysctl-overwrites"
Dec 13 02:21:36.463966 kubelet[1915]: E1213 02:21:36.463861 1915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e06e8fbc-4902-40d9-a183-dd08630570d2" containerName="clean-cilium-state"
Dec 13 02:21:36.463966 kubelet[1915]: I1213 02:21:36.463886 1915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e06e8fbc-4902-40d9-a183-dd08630570d2" containerName="cilium-agent"
Dec 13 02:21:36.476297 systemd[1]: Created slice kubepods-besteffort-podf4fb34e5_38e5_444b_bcb2_4a26358f633a.slice.
Dec 13 02:21:36.499711 kubelet[1915]: I1213 02:21:36.499670 1915 topology_manager.go:215] "Topology Admit Handler" podUID="5dfb9273-2941-48e8-ac2e-95f487ba530b" podNamespace="kube-system" podName="cilium-464xb"
Dec 13 02:21:36.517020 systemd[1]: Created slice kubepods-burstable-pod5dfb9273_2941_48e8_ac2e_95f487ba530b.slice.
Dec 13 02:21:36.542778 kubelet[1915]: I1213 02:21:36.542744 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-host-proc-sys-net\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.542977 kubelet[1915]: I1213 02:21:36.542810 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-xtables-lock\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.542977 kubelet[1915]: I1213 02:21:36.542840 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll6mm\" (UniqueName: \"kubernetes.io/projected/5dfb9273-2941-48e8-ac2e-95f487ba530b-kube-api-access-ll6mm\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.542977 kubelet[1915]: I1213 02:21:36.542871 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-run\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.542977 kubelet[1915]: I1213 02:21:36.542901 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-lib-modules\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.542977 kubelet[1915]: I1213 02:21:36.542929 1915 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-config-path\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.542977 kubelet[1915]: I1213 02:21:36.542958 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cni-path\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.543248 kubelet[1915]: I1213 02:21:36.542984 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5dfb9273-2941-48e8-ac2e-95f487ba530b-hubble-tls\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.543248 kubelet[1915]: I1213 02:21:36.543019 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgz7\" (UniqueName: \"kubernetes.io/projected/f4fb34e5-38e5-444b-bcb2-4a26358f633a-kube-api-access-llgz7\") pod \"cilium-operator-5cc964979-r64h8\" (UID: \"f4fb34e5-38e5-444b-bcb2-4a26358f633a\") " pod="kube-system/cilium-operator-5cc964979-r64h8" Dec 13 02:21:36.543248 kubelet[1915]: I1213 02:21:36.543055 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4fb34e5-38e5-444b-bcb2-4a26358f633a-cilium-config-path\") pod \"cilium-operator-5cc964979-r64h8\" (UID: \"f4fb34e5-38e5-444b-bcb2-4a26358f633a\") " pod="kube-system/cilium-operator-5cc964979-r64h8" Dec 13 02:21:36.543248 kubelet[1915]: I1213 02:21:36.543085 1915 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-etc-cni-netd\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.543248 kubelet[1915]: I1213 02:21:36.543115 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5dfb9273-2941-48e8-ac2e-95f487ba530b-clustermesh-secrets\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.543453 kubelet[1915]: I1213 02:21:36.543149 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-ipsec-secrets\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.543453 kubelet[1915]: I1213 02:21:36.543179 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-hostproc\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.543453 kubelet[1915]: I1213 02:21:36.543211 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-cgroup\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.543453 kubelet[1915]: I1213 02:21:36.543240 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-bpf-maps\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.543453 kubelet[1915]: I1213 02:21:36.543272 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-host-proc-sys-kernel\") pod \"cilium-464xb\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " pod="kube-system/cilium-464xb" Dec 13 02:21:36.792118 env[1567]: time="2024-12-13T02:21:36.792066451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-r64h8,Uid:f4fb34e5-38e5-444b-bcb2-4a26358f633a,Namespace:kube-system,Attempt:0,}" Dec 13 02:21:36.819107 env[1567]: time="2024-12-13T02:21:36.819014960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:21:36.819107 env[1567]: time="2024-12-13T02:21:36.819055979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:21:36.819505 env[1567]: time="2024-12-13T02:21:36.819071170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:21:36.819505 env[1567]: time="2024-12-13T02:21:36.819275039Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09fe80accc0b499aa456bf111c6d5b5865cf6229effc0c849e7deabf8f516233 pid=3552 runtime=io.containerd.runc.v2 Dec 13 02:21:36.830347 env[1567]: time="2024-12-13T02:21:36.830294673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-464xb,Uid:5dfb9273-2941-48e8-ac2e-95f487ba530b,Namespace:kube-system,Attempt:0,}" Dec 13 02:21:36.839427 systemd[1]: Started cri-containerd-09fe80accc0b499aa456bf111c6d5b5865cf6229effc0c849e7deabf8f516233.scope. Dec 13 02:21:36.868723 env[1567]: time="2024-12-13T02:21:36.868632744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:21:36.868723 env[1567]: time="2024-12-13T02:21:36.868683715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:21:36.869029 env[1567]: time="2024-12-13T02:21:36.868708807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:21:36.869399 env[1567]: time="2024-12-13T02:21:36.869338768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16 pid=3588 runtime=io.containerd.runc.v2 Dec 13 02:21:36.883378 systemd[1]: Started cri-containerd-a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16.scope. 
Dec 13 02:21:36.921117 env[1567]: time="2024-12-13T02:21:36.921065642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-r64h8,Uid:f4fb34e5-38e5-444b-bcb2-4a26358f633a,Namespace:kube-system,Attempt:0,} returns sandbox id \"09fe80accc0b499aa456bf111c6d5b5865cf6229effc0c849e7deabf8f516233\"" Dec 13 02:21:36.923585 env[1567]: time="2024-12-13T02:21:36.923541316Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:21:36.928754 env[1567]: time="2024-12-13T02:21:36.928630001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-464xb,Uid:5dfb9273-2941-48e8-ac2e-95f487ba530b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\"" Dec 13 02:21:36.931746 env[1567]: time="2024-12-13T02:21:36.931709765Z" level=info msg="CreateContainer within sandbox \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:21:36.955749 env[1567]: time="2024-12-13T02:21:36.955705653Z" level=info msg="CreateContainer within sandbox \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8\"" Dec 13 02:21:36.956512 env[1567]: time="2024-12-13T02:21:36.956486399Z" level=info msg="StartContainer for \"04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8\"" Dec 13 02:21:36.974043 systemd[1]: Started cri-containerd-04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8.scope. Dec 13 02:21:36.986757 systemd[1]: cri-containerd-04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8.scope: Deactivated successfully. 
Dec 13 02:21:37.007478 env[1567]: time="2024-12-13T02:21:37.007423092Z" level=info msg="shim disconnected" id=04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8 Dec 13 02:21:37.007478 env[1567]: time="2024-12-13T02:21:37.007476046Z" level=warning msg="cleaning up after shim disconnected" id=04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8 namespace=k8s.io Dec 13 02:21:37.007809 env[1567]: time="2024-12-13T02:21:37.007488057Z" level=info msg="cleaning up dead shim" Dec 13 02:21:37.015826 env[1567]: time="2024-12-13T02:21:37.015760973Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3653 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:21:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 02:21:37.016303 env[1567]: time="2024-12-13T02:21:37.016189376Z" level=error msg="copy shim log" error="read /proc/self/fd/66: file already closed" Dec 13 02:21:37.016911 env[1567]: time="2024-12-13T02:21:37.016436175Z" level=error msg="Failed to pipe stdout of container \"04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8\"" error="reading from a closed fifo" Dec 13 02:21:37.017067 env[1567]: time="2024-12-13T02:21:37.016882486Z" level=error msg="Failed to pipe stderr of container \"04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8\"" error="reading from a closed fifo" Dec 13 02:21:37.020357 env[1567]: time="2024-12-13T02:21:37.020287370Z" level=error msg="StartContainer for \"04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 02:21:37.020633 kubelet[1915]: E1213 02:21:37.020597 1915 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8" Dec 13 02:21:37.020770 kubelet[1915]: E1213 02:21:37.020755 1915 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 02:21:37.020770 kubelet[1915]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 02:21:37.020770 kubelet[1915]: rm /hostbin/cilium-mount Dec 13 02:21:37.020957 kubelet[1915]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ll6mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-464xb_kube-system(5dfb9273-2941-48e8-ac2e-95f487ba530b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 02:21:37.020957 kubelet[1915]: E1213 02:21:37.020830 1915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-464xb" podUID="5dfb9273-2941-48e8-ac2e-95f487ba530b" Dec 13 02:21:37.315569 kubelet[1915]: E1213 02:21:37.315491 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:37.666834 env[1567]: time="2024-12-13T02:21:37.666755664Z" level=info msg="StopPodSandbox for \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\"" Dec 13 02:21:37.669517 env[1567]: time="2024-12-13T02:21:37.666864548Z" level=info msg="Container to stop \"04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:21:37.669476 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16-shm.mount: Deactivated successfully. Dec 13 02:21:37.677939 systemd[1]: cri-containerd-a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16.scope: Deactivated successfully. Dec 13 02:21:37.700127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16-rootfs.mount: Deactivated successfully. Dec 13 02:21:37.712223 env[1567]: time="2024-12-13T02:21:37.712168375Z" level=info msg="shim disconnected" id=a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16 Dec 13 02:21:37.712934 env[1567]: time="2024-12-13T02:21:37.712903377Z" level=warning msg="cleaning up after shim disconnected" id=a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16 namespace=k8s.io Dec 13 02:21:37.713095 env[1567]: time="2024-12-13T02:21:37.713047321Z" level=info msg="cleaning up dead shim" Dec 13 02:21:37.722209 env[1567]: time="2024-12-13T02:21:37.722163426Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3685 runtime=io.containerd.runc.v2\n" Dec 13 02:21:37.722738 env[1567]: time="2024-12-13T02:21:37.722697557Z" level=info msg="TearDown network for sandbox \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\" successfully" Dec 13 02:21:37.722738 env[1567]: time="2024-12-13T02:21:37.722734264Z" level=info msg="StopPodSandbox for \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\" returns successfully" Dec 13 02:21:37.866104 kubelet[1915]: I1213 02:21:37.866067 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-etc-cni-netd\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866314 
kubelet[1915]: I1213 02:21:37.866129 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5dfb9273-2941-48e8-ac2e-95f487ba530b-clustermesh-secrets\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866314 kubelet[1915]: I1213 02:21:37.866158 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-ipsec-secrets\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866314 kubelet[1915]: I1213 02:21:37.866180 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-bpf-maps\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866314 kubelet[1915]: I1213 02:21:37.866203 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-xtables-lock\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866314 kubelet[1915]: I1213 02:21:37.866231 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll6mm\" (UniqueName: \"kubernetes.io/projected/5dfb9273-2941-48e8-ac2e-95f487ba530b-kube-api-access-ll6mm\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866314 kubelet[1915]: I1213 02:21:37.866258 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cni-path\") pod 
\"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866314 kubelet[1915]: I1213 02:21:37.866284 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-host-proc-sys-net\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866314 kubelet[1915]: I1213 02:21:37.866316 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-config-path\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866681 kubelet[1915]: I1213 02:21:37.866352 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5dfb9273-2941-48e8-ac2e-95f487ba530b-hubble-tls\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866681 kubelet[1915]: I1213 02:21:37.866397 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-cgroup\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866681 kubelet[1915]: I1213 02:21:37.866425 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-run\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866681 kubelet[1915]: I1213 02:21:37.866455 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-hostproc\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866681 kubelet[1915]: I1213 02:21:37.866482 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-lib-modules\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866681 kubelet[1915]: I1213 02:21:37.866511 1915 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-host-proc-sys-kernel\") pod \"5dfb9273-2941-48e8-ac2e-95f487ba530b\" (UID: \"5dfb9273-2941-48e8-ac2e-95f487ba530b\") " Dec 13 02:21:37.866681 kubelet[1915]: I1213 02:21:37.866595 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.866681 kubelet[1915]: I1213 02:21:37.866630 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.868818 kubelet[1915]: I1213 02:21:37.867105 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.872416 systemd[1]: var-lib-kubelet-pods-5dfb9273\x2d2941\x2d48e8\x2dac2e\x2d95f487ba530b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:21:37.873815 kubelet[1915]: I1213 02:21:37.873762 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.873923 kubelet[1915]: I1213 02:21:37.873836 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.874548 kubelet[1915]: I1213 02:21:37.874520 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cni-path" (OuterVolumeSpecName: "cni-path") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.874890 kubelet[1915]: I1213 02:21:37.874846 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:21:37.875193 kubelet[1915]: I1213 02:21:37.875171 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.875337 kubelet[1915]: I1213 02:21:37.875319 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.875492 kubelet[1915]: I1213 02:21:37.875474 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-hostproc" (OuterVolumeSpecName: "hostproc") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.875623 kubelet[1915]: I1213 02:21:37.875606 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:37.875852 kubelet[1915]: I1213 02:21:37.875832 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dfb9273-2941-48e8-ac2e-95f487ba530b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:21:37.880308 systemd[1]: var-lib-kubelet-pods-5dfb9273\x2d2941\x2d48e8\x2dac2e\x2d95f487ba530b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:21:37.881986 kubelet[1915]: I1213 02:21:37.881948 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dfb9273-2941-48e8-ac2e-95f487ba530b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:21:37.883266 kubelet[1915]: I1213 02:21:37.883231 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dfb9273-2941-48e8-ac2e-95f487ba530b-kube-api-access-ll6mm" (OuterVolumeSpecName: "kube-api-access-ll6mm") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "kube-api-access-ll6mm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:21:37.884627 kubelet[1915]: I1213 02:21:37.884599 1915 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5dfb9273-2941-48e8-ac2e-95f487ba530b" (UID: "5dfb9273-2941-48e8-ac2e-95f487ba530b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967137 1915 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-run\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967182 1915 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-cgroup\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967222 1915 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-lib-modules\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967239 1915 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-hostproc\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967256 1915 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-host-proc-sys-kernel\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967273 1915 reconciler_common.go:300] "Volume detached for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5dfb9273-2941-48e8-ac2e-95f487ba530b-clustermesh-secrets\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967309 1915 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-ipsec-secrets\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967327 1915 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-bpf-maps\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967343 1915 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-xtables-lock\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967378 1915 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ll6mm\" (UniqueName: \"kubernetes.io/projected/5dfb9273-2941-48e8-ac2e-95f487ba530b-kube-api-access-ll6mm\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967394 1915 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-etc-cni-netd\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967411 1915 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-cni-path\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967426 1915 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/5dfb9273-2941-48e8-ac2e-95f487ba530b-host-proc-sys-net\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967460 1915 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5dfb9273-2941-48e8-ac2e-95f487ba530b-cilium-config-path\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:37.970895 kubelet[1915]: I1213 02:21:37.967477 1915 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5dfb9273-2941-48e8-ac2e-95f487ba530b-hubble-tls\") on node \"172.31.22.41\" DevicePath \"\"" Dec 13 02:21:38.315843 kubelet[1915]: E1213 02:21:38.315685 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:38.386080 kubelet[1915]: E1213 02:21:38.386045 1915 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:21:38.436566 systemd[1]: Removed slice kubepods-burstable-pod5dfb9273_2941_48e8_ac2e_95f487ba530b.slice. Dec 13 02:21:38.661369 systemd[1]: var-lib-kubelet-pods-5dfb9273\x2d2941\x2d48e8\x2dac2e\x2d95f487ba530b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dll6mm.mount: Deactivated successfully. Dec 13 02:21:38.661751 systemd[1]: var-lib-kubelet-pods-5dfb9273\x2d2941\x2d48e8\x2dac2e\x2d95f487ba530b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:21:38.676007 kubelet[1915]: I1213 02:21:38.675671 1915 scope.go:117] "RemoveContainer" containerID="04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8" Dec 13 02:21:38.699341 env[1567]: time="2024-12-13T02:21:38.697055577Z" level=info msg="RemoveContainer for \"04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8\"" Dec 13 02:21:38.725897 env[1567]: time="2024-12-13T02:21:38.725843781Z" level=info msg="RemoveContainer for \"04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8\" returns successfully" Dec 13 02:21:38.753624 kubelet[1915]: I1213 02:21:38.753589 1915 topology_manager.go:215] "Topology Admit Handler" podUID="4294120e-d2af-4076-9ba4-a819edb421da" podNamespace="kube-system" podName="cilium-k7vw9" Dec 13 02:21:38.753624 kubelet[1915]: E1213 02:21:38.753648 1915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5dfb9273-2941-48e8-ac2e-95f487ba530b" containerName="mount-cgroup" Dec 13 02:21:38.758235 kubelet[1915]: I1213 02:21:38.758051 1915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfb9273-2941-48e8-ac2e-95f487ba530b" containerName="mount-cgroup" Dec 13 02:21:38.776522 systemd[1]: Created slice kubepods-burstable-pod4294120e_d2af_4076_9ba4_a819edb421da.slice. 
Dec 13 02:21:38.876420 kubelet[1915]: I1213 02:21:38.876380 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4294120e-d2af-4076-9ba4-a819edb421da-cilium-config-path\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.876672 kubelet[1915]: I1213 02:21:38.876656 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4294120e-d2af-4076-9ba4-a819edb421da-hubble-tls\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.876817 kubelet[1915]: I1213 02:21:38.876781 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-xtables-lock\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.876940 kubelet[1915]: I1213 02:21:38.876923 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4294120e-d2af-4076-9ba4-a819edb421da-clustermesh-secrets\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877016 kubelet[1915]: I1213 02:21:38.876965 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-cni-path\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877016 kubelet[1915]: I1213 02:21:38.876997 1915 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-lib-modules\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877136 kubelet[1915]: I1213 02:21:38.877030 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-etc-cni-netd\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877136 kubelet[1915]: I1213 02:21:38.877084 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-host-proc-sys-kernel\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877136 kubelet[1915]: I1213 02:21:38.877115 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slbfv\" (UniqueName: \"kubernetes.io/projected/4294120e-d2af-4076-9ba4-a819edb421da-kube-api-access-slbfv\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877267 kubelet[1915]: I1213 02:21:38.877150 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-host-proc-sys-net\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877267 kubelet[1915]: I1213 02:21:38.877181 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-cilium-cgroup\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877267 kubelet[1915]: I1213 02:21:38.877212 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4294120e-d2af-4076-9ba4-a819edb421da-cilium-ipsec-secrets\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877267 kubelet[1915]: I1213 02:21:38.877245 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-bpf-maps\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877450 kubelet[1915]: I1213 02:21:38.877276 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-hostproc\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:38.877450 kubelet[1915]: I1213 02:21:38.877315 1915 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4294120e-d2af-4076-9ba4-a819edb421da-cilium-run\") pod \"cilium-k7vw9\" (UID: \"4294120e-d2af-4076-9ba4-a819edb421da\") " pod="kube-system/cilium-k7vw9" Dec 13 02:21:39.316184 kubelet[1915]: E1213 02:21:39.316138 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:39.392602 env[1567]: time="2024-12-13T02:21:39.392538293Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-k7vw9,Uid:4294120e-d2af-4076-9ba4-a819edb421da,Namespace:kube-system,Attempt:0,}" Dec 13 02:21:39.422556 env[1567]: time="2024-12-13T02:21:39.422397414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:21:39.422717 env[1567]: time="2024-12-13T02:21:39.422575934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:21:39.422717 env[1567]: time="2024-12-13T02:21:39.422636078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:21:39.423048 env[1567]: time="2024-12-13T02:21:39.422993740Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372 pid=3715 runtime=io.containerd.runc.v2 Dec 13 02:21:39.457243 systemd[1]: Started cri-containerd-5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372.scope. 
Dec 13 02:21:39.518074 env[1567]: time="2024-12-13T02:21:39.518026752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7vw9,Uid:4294120e-d2af-4076-9ba4-a819edb421da,Namespace:kube-system,Attempt:0,} returns sandbox id \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\"" Dec 13 02:21:39.530356 env[1567]: time="2024-12-13T02:21:39.530320940Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:21:39.542924 kubelet[1915]: I1213 02:21:39.540831 1915 setters.go:568] "Node became not ready" node="172.31.22.41" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:21:39Z","lastTransitionTime":"2024-12-13T02:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:21:39.559193 env[1567]: time="2024-12-13T02:21:39.559148567Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156\"" Dec 13 02:21:39.560585 env[1567]: time="2024-12-13T02:21:39.560561019Z" level=info msg="StartContainer for \"b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156\"" Dec 13 02:21:39.590181 systemd[1]: Started cri-containerd-b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156.scope. Dec 13 02:21:39.635163 env[1567]: time="2024-12-13T02:21:39.634047882Z" level=info msg="StartContainer for \"b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156\" returns successfully" Dec 13 02:21:39.677290 systemd[1]: cri-containerd-b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156.scope: Deactivated successfully. 
Dec 13 02:21:39.716867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156-rootfs.mount: Deactivated successfully. Dec 13 02:21:39.754323 env[1567]: time="2024-12-13T02:21:39.754261645Z" level=info msg="shim disconnected" id=b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156 Dec 13 02:21:39.754323 env[1567]: time="2024-12-13T02:21:39.754322214Z" level=warning msg="cleaning up after shim disconnected" id=b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156 namespace=k8s.io Dec 13 02:21:39.754323 env[1567]: time="2024-12-13T02:21:39.754335951Z" level=info msg="cleaning up dead shim" Dec 13 02:21:39.781804 env[1567]: time="2024-12-13T02:21:39.781732239Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3799 runtime=io.containerd.runc.v2\n" Dec 13 02:21:40.119142 kubelet[1915]: W1213 02:21:40.119087 1915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dfb9273_2941_48e8_ac2e_95f487ba530b.slice/cri-containerd-04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8.scope WatchSource:0}: container "04c56cb22ec8ce584d1a8951dba9f899607848e02ae38c36cfd23edfda9c66f8" in namespace "k8s.io": not found Dec 13 02:21:40.317072 kubelet[1915]: E1213 02:21:40.316963 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:40.432424 kubelet[1915]: I1213 02:21:40.432137 1915 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5dfb9273-2941-48e8-ac2e-95f487ba530b" path="/var/lib/kubelet/pods/5dfb9273-2941-48e8-ac2e-95f487ba530b/volumes" Dec 13 02:21:40.708425 env[1567]: time="2024-12-13T02:21:40.708108139Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for 
container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:21:40.733894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78674925.mount: Deactivated successfully. Dec 13 02:21:40.745337 env[1567]: time="2024-12-13T02:21:40.745286227Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8\"" Dec 13 02:21:40.747509 env[1567]: time="2024-12-13T02:21:40.747368609Z" level=info msg="StartContainer for \"dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8\"" Dec 13 02:21:40.791141 systemd[1]: Started cri-containerd-dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8.scope. Dec 13 02:21:40.834197 env[1567]: time="2024-12-13T02:21:40.834109705Z" level=info msg="StartContainer for \"dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8\" returns successfully" Dec 13 02:21:40.845321 systemd[1]: cri-containerd-dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8.scope: Deactivated successfully. Dec 13 02:21:40.868871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8-rootfs.mount: Deactivated successfully. 
Dec 13 02:21:40.892954 env[1567]: time="2024-12-13T02:21:40.892899604Z" level=info msg="shim disconnected" id=dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8 Dec 13 02:21:40.892954 env[1567]: time="2024-12-13T02:21:40.892953491Z" level=warning msg="cleaning up after shim disconnected" id=dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8 namespace=k8s.io Dec 13 02:21:40.893397 env[1567]: time="2024-12-13T02:21:40.892965290Z" level=info msg="cleaning up dead shim" Dec 13 02:21:40.903306 env[1567]: time="2024-12-13T02:21:40.903251724Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3861 runtime=io.containerd.runc.v2\n" Dec 13 02:21:41.317429 kubelet[1915]: E1213 02:21:41.317393 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:41.703014 env[1567]: time="2024-12-13T02:21:41.702962129Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:21:41.729040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720623691.mount: Deactivated successfully. Dec 13 02:21:41.749198 env[1567]: time="2024-12-13T02:21:41.749144502Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff\"" Dec 13 02:21:41.750033 env[1567]: time="2024-12-13T02:21:41.749958506Z" level=info msg="StartContainer for \"5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff\"" Dec 13 02:21:41.786272 systemd[1]: Started cri-containerd-5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff.scope. 
Dec 13 02:21:41.862280 env[1567]: time="2024-12-13T02:21:41.862219279Z" level=info msg="StartContainer for \"5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff\" returns successfully" Dec 13 02:21:41.868883 systemd[1]: cri-containerd-5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff.scope: Deactivated successfully. Dec 13 02:21:41.907964 env[1567]: time="2024-12-13T02:21:41.907910035Z" level=info msg="shim disconnected" id=5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff Dec 13 02:21:41.907964 env[1567]: time="2024-12-13T02:21:41.907962936Z" level=warning msg="cleaning up after shim disconnected" id=5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff namespace=k8s.io Dec 13 02:21:41.908279 env[1567]: time="2024-12-13T02:21:41.907975268Z" level=info msg="cleaning up dead shim" Dec 13 02:21:41.917270 env[1567]: time="2024-12-13T02:21:41.917214576Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3921 runtime=io.containerd.runc.v2\n" Dec 13 02:21:42.317844 kubelet[1915]: E1213 02:21:42.317807 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:42.708093 env[1567]: time="2024-12-13T02:21:42.708051174Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:21:42.724743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff-rootfs.mount: Deactivated successfully. Dec 13 02:21:42.753564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2475001727.mount: Deactivated successfully. Dec 13 02:21:42.778120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount548127478.mount: Deactivated successfully. 
Dec 13 02:21:42.796672 env[1567]: time="2024-12-13T02:21:42.796613321Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a\"" Dec 13 02:21:42.797572 env[1567]: time="2024-12-13T02:21:42.797477780Z" level=info msg="StartContainer for \"1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a\"" Dec 13 02:21:42.861444 systemd[1]: Started cri-containerd-1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a.scope. Dec 13 02:21:42.914878 systemd[1]: cri-containerd-1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a.scope: Deactivated successfully. Dec 13 02:21:42.922306 env[1567]: time="2024-12-13T02:21:42.922257484Z" level=info msg="StartContainer for \"1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a\" returns successfully" Dec 13 02:21:42.923189 env[1567]: time="2024-12-13T02:21:42.918345983Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4294120e_d2af_4076_9ba4_a819edb421da.slice/cri-containerd-1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a.scope/memory.events\": no such file or directory" Dec 13 02:21:43.006702 env[1567]: time="2024-12-13T02:21:43.005960958Z" level=info msg="shim disconnected" id=1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a Dec 13 02:21:43.007062 env[1567]: time="2024-12-13T02:21:43.007034343Z" level=warning msg="cleaning up after shim disconnected" id=1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a namespace=k8s.io Dec 13 02:21:43.007185 env[1567]: time="2024-12-13T02:21:43.007167229Z" level=info msg="cleaning up dead shim" Dec 13 02:21:43.022951 env[1567]: time="2024-12-13T02:21:43.022895021Z" level=warning 
msg="cleanup warnings time=\"2024-12-13T02:21:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3980 runtime=io.containerd.runc.v2\n" Dec 13 02:21:43.235844 kubelet[1915]: W1213 02:21:43.234967 1915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4294120e_d2af_4076_9ba4_a819edb421da.slice/cri-containerd-b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156.scope WatchSource:0}: task b66e26503536f5117af2349ef8e525ec2f9e4483b57fe686adcdd6ed7db5d156 not found: not found Dec 13 02:21:43.318575 kubelet[1915]: E1213 02:21:43.318079 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:43.387770 kubelet[1915]: E1213 02:21:43.387738 1915 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:21:43.684103 env[1567]: time="2024-12-13T02:21:43.684049064Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:43.688268 env[1567]: time="2024-12-13T02:21:43.688223470Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:43.691201 env[1567]: time="2024-12-13T02:21:43.691142116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:43.691882 env[1567]: time="2024-12-13T02:21:43.691835137Z" level=info 
msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:21:43.695075 env[1567]: time="2024-12-13T02:21:43.695035916Z" level=info msg="CreateContainer within sandbox \"09fe80accc0b499aa456bf111c6d5b5865cf6229effc0c849e7deabf8f516233\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:21:43.715198 env[1567]: time="2024-12-13T02:21:43.715143222Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:21:43.729397 env[1567]: time="2024-12-13T02:21:43.729304956Z" level=info msg="CreateContainer within sandbox \"09fe80accc0b499aa456bf111c6d5b5865cf6229effc0c849e7deabf8f516233\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7a18bf2c7d131a3ebd8ad21a20b39c9fa3d253ae8a87d661c85fbcb86499c967\"" Dec 13 02:21:43.730085 env[1567]: time="2024-12-13T02:21:43.730045043Z" level=info msg="StartContainer for \"7a18bf2c7d131a3ebd8ad21a20b39c9fa3d253ae8a87d661c85fbcb86499c967\"" Dec 13 02:21:43.761942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598904637.mount: Deactivated successfully. 
Dec 13 02:21:43.788737 env[1567]: time="2024-12-13T02:21:43.788542655Z" level=info msg="CreateContainer within sandbox \"5df01658f1566d2573fc075fdf95d4e94243cebbe680332047b00bb5fb47e372\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948\"" Dec 13 02:21:43.790012 env[1567]: time="2024-12-13T02:21:43.789974015Z" level=info msg="StartContainer for \"b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948\"" Dec 13 02:21:43.801333 systemd[1]: Started cri-containerd-7a18bf2c7d131a3ebd8ad21a20b39c9fa3d253ae8a87d661c85fbcb86499c967.scope. Dec 13 02:21:43.883294 systemd[1]: Started cri-containerd-b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948.scope. Dec 13 02:21:43.898993 env[1567]: time="2024-12-13T02:21:43.898939580Z" level=info msg="StartContainer for \"7a18bf2c7d131a3ebd8ad21a20b39c9fa3d253ae8a87d661c85fbcb86499c967\" returns successfully" Dec 13 02:21:43.954062 env[1567]: time="2024-12-13T02:21:43.953902608Z" level=info msg="StartContainer for \"b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948\" returns successfully" Dec 13 02:21:44.319906 kubelet[1915]: E1213 02:21:44.319736 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:44.742056 kubelet[1915]: I1213 02:21:44.742007 1915 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-r64h8" podStartSLOduration=1.972650576 podStartE2EDuration="8.741936364s" podCreationTimestamp="2024-12-13 02:21:36 +0000 UTC" firstStartedPulling="2024-12-13 02:21:36.922886656 +0000 UTC m=+69.314541650" lastFinishedPulling="2024-12-13 02:21:43.692172441 +0000 UTC m=+76.083827438" observedRunningTime="2024-12-13 02:21:44.741352978 +0000 UTC m=+77.133007983" watchObservedRunningTime="2024-12-13 02:21:44.741936364 +0000 UTC m=+77.133591364" Dec 13 02:21:44.787263 
kubelet[1915]: I1213 02:21:44.787217 1915 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-k7vw9" podStartSLOduration=6.787150308 podStartE2EDuration="6.787150308s" podCreationTimestamp="2024-12-13 02:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:21:44.786974147 +0000 UTC m=+77.178629153" watchObservedRunningTime="2024-12-13 02:21:44.787150308 +0000 UTC m=+77.178805312" Dec 13 02:21:44.839817 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 02:21:45.320524 kubelet[1915]: E1213 02:21:45.320482 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:45.992013 systemd[1]: run-containerd-runc-k8s.io-b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948-runc.tWr2r8.mount: Deactivated successfully. Dec 13 02:21:46.322692 kubelet[1915]: E1213 02:21:46.322246 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:46.350026 kubelet[1915]: W1213 02:21:46.349983 1915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4294120e_d2af_4076_9ba4_a819edb421da.slice/cri-containerd-dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8.scope WatchSource:0}: task dd01c789f6bd7d45afbe519ca50c4dd7769d75883f8c73fa89e820f6757236e8 not found: not found Dec 13 02:21:47.323435 kubelet[1915]: E1213 02:21:47.323231 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:21:48.248548 (udev-worker)[4572]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:21:48.250145 (udev-worker)[4573]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 02:21:48.256120 systemd-networkd[1289]: lxc_health: Link UP
Dec 13 02:21:48.259516 kubelet[1915]: E1213 02:21:48.259483 1915 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:48.267089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:21:48.266711 systemd-networkd[1289]: lxc_health: Gained carrier
Dec 13 02:21:48.324480 kubelet[1915]: E1213 02:21:48.324403 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:48.387092 systemd[1]: run-containerd-runc-k8s.io-b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948-runc.xqP9hx.mount: Deactivated successfully.
Dec 13 02:21:49.327343 kubelet[1915]: E1213 02:21:49.327271 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:49.459393 kubelet[1915]: W1213 02:21:49.459355 1915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4294120e_d2af_4076_9ba4_a819edb421da.slice/cri-containerd-5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff.scope WatchSource:0}: task 5bd5b8f4c9442afb97938f024b44b004b8ffb5d333d703e6929afbb10ec2f1ff not found: not found
Dec 13 02:21:49.581363 systemd-networkd[1289]: lxc_health: Gained IPv6LL
Dec 13 02:21:50.328693 kubelet[1915]: E1213 02:21:50.328643 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:50.724216 systemd[1]: run-containerd-runc-k8s.io-b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948-runc.0YyjoS.mount: Deactivated successfully.
Dec 13 02:21:51.329930 kubelet[1915]: E1213 02:21:51.329885 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:52.330933 kubelet[1915]: E1213 02:21:52.330884 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:52.590590 kubelet[1915]: W1213 02:21:52.590377 1915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4294120e_d2af_4076_9ba4_a819edb421da.slice/cri-containerd-1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a.scope WatchSource:0}: task 1a5365d945e7c60ee4e82ef04049e828ed79cb213eacab65c76cf93fdb39447a not found: not found
Dec 13 02:21:53.111424 systemd[1]: run-containerd-runc-k8s.io-b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948-runc.qdQIzH.mount: Deactivated successfully.
Dec 13 02:21:53.331459 kubelet[1915]: E1213 02:21:53.331375 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:54.332051 kubelet[1915]: E1213 02:21:54.332016 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:55.333710 kubelet[1915]: E1213 02:21:55.333662 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:55.519957 systemd[1]: run-containerd-runc-k8s.io-b8c06d641f605396f10d7a0956e50b2d5d7e99200ccc9d1e0d3c0d0a68dfb948-runc.fnEN9A.mount: Deactivated successfully.
Dec 13 02:21:56.334505 kubelet[1915]: E1213 02:21:56.334452 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:57.335084 kubelet[1915]: E1213 02:21:57.335033 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:58.335411 kubelet[1915]: E1213 02:21:58.335358 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:21:59.336197 kubelet[1915]: E1213 02:21:59.336145 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:00.336930 kubelet[1915]: E1213 02:22:00.336876 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:01.337668 kubelet[1915]: E1213 02:22:01.337537 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:02.337806 kubelet[1915]: E1213 02:22:02.337744 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:03.338610 kubelet[1915]: E1213 02:22:03.338556 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:04.339549 kubelet[1915]: E1213 02:22:04.339487 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:05.339739 kubelet[1915]: E1213 02:22:05.339682 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:06.340492 kubelet[1915]: E1213 02:22:06.340434 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:07.340733 kubelet[1915]: E1213 02:22:07.340657 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:08.260199 kubelet[1915]: E1213 02:22:08.260149 1915 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:08.341816 kubelet[1915]: E1213 02:22:08.341756 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:09.342569 kubelet[1915]: E1213 02:22:09.342520 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:09.860812 kubelet[1915]: E1213 02:22:09.860758 1915 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-12-13T02:21:59Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-12-13T02:21:59Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-12-13T02:21:59Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-12-13T02:21:59Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":71035905},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\\\",\\\"registry.k8s.io/kube-proxy:v1.29.12\\\"],\\\"sizeBytes\\\":28618977},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"172.31.22.41\": Patch \"https://172.31.31.142:6443/api/v1/nodes/172.31.22.41/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:22:10.343459 kubelet[1915]: E1213 02:22:10.343341 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:10.753692 kubelet[1915]: E1213 02:22:10.753654 1915 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:22:11.344373 kubelet[1915]: E1213 02:22:11.344315 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:12.345543 kubelet[1915]: E1213 02:22:12.345487 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:13.346290 kubelet[1915]: E1213 02:22:13.346230 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:14.346613 kubelet[1915]: E1213 02:22:14.346556 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:15.347249 kubelet[1915]: E1213 02:22:15.347196 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:16.347869 kubelet[1915]: E1213 02:22:16.347822 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:17.249746 amazon-ssm-agent[1555]: 2024-12-13 02:22:17 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 02:22:17.348231 kubelet[1915]: E1213 02:22:17.348140 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:18.349233 kubelet[1915]: E1213 02:22:18.349179 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:19.349740 kubelet[1915]: E1213 02:22:19.349684 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:19.861255 kubelet[1915]: E1213 02:22:19.861216 1915 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.22.41\": Get \"https://172.31.31.142:6443/api/v1/nodes/172.31.22.41?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:22:20.350008 kubelet[1915]: E1213 02:22:20.349881 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:20.754323 kubelet[1915]: E1213 02:22:20.754280 1915 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:22:21.350711 kubelet[1915]: E1213 02:22:21.350655 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:22.351629 kubelet[1915]: E1213 02:22:22.351584 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:23.352014 kubelet[1915]: E1213 02:22:23.351960 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:24.352661 kubelet[1915]: E1213 02:22:24.352609 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:25.353121 kubelet[1915]: E1213 02:22:25.353067 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:26.354151 kubelet[1915]: E1213 02:22:26.354099 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:27.354690 kubelet[1915]: E1213 02:22:27.354638 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:28.260061 kubelet[1915]: E1213 02:22:28.260012 1915 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:28.300976 env[1567]: time="2024-12-13T02:22:28.300855558Z" level=info msg="StopPodSandbox for \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\""
Dec 13 02:22:28.301700 env[1567]: time="2024-12-13T02:22:28.301051439Z" level=info msg="TearDown network for sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" successfully"
Dec 13 02:22:28.301700 env[1567]: time="2024-12-13T02:22:28.301103532Z" level=info msg="StopPodSandbox for \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" returns successfully"
Dec 13 02:22:28.303265 env[1567]: time="2024-12-13T02:22:28.303123440Z" level=info msg="RemovePodSandbox for \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\""
Dec 13 02:22:28.303434 env[1567]: time="2024-12-13T02:22:28.303269845Z" level=info msg="Forcibly stopping sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\""
Dec 13 02:22:28.303522 env[1567]: time="2024-12-13T02:22:28.303438085Z" level=info msg="TearDown network for sandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" successfully"
Dec 13 02:22:28.311412 env[1567]: time="2024-12-13T02:22:28.311347857Z" level=info msg="RemovePodSandbox \"9070c07b41c8d291a59889b0fe963532b18d4c9b77228d2fcecb60ac3911fcd2\" returns successfully"
Dec 13 02:22:28.312449 env[1567]: time="2024-12-13T02:22:28.312409256Z" level=info msg="StopPodSandbox for \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\""
Dec 13 02:22:28.312577 env[1567]: time="2024-12-13T02:22:28.312516458Z" level=info msg="TearDown network for sandbox \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\" successfully"
Dec 13 02:22:28.312577 env[1567]: time="2024-12-13T02:22:28.312562103Z" level=info msg="StopPodSandbox for \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\" returns successfully"
Dec 13 02:22:28.313154 env[1567]: time="2024-12-13T02:22:28.313126597Z" level=info msg="RemovePodSandbox for \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\""
Dec 13 02:22:28.313251 env[1567]: time="2024-12-13T02:22:28.313155383Z" level=info msg="Forcibly stopping sandbox \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\""
Dec 13 02:22:28.313251 env[1567]: time="2024-12-13T02:22:28.313242057Z" level=info msg="TearDown network for sandbox \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\" successfully"
Dec 13 02:22:28.321428 env[1567]: time="2024-12-13T02:22:28.321375854Z" level=info msg="RemovePodSandbox \"a9b397980125c7fb51d21fb63d026cf734f43c23f351f2918e8cca2af811da16\" returns successfully"
Dec 13 02:22:28.355972 kubelet[1915]: E1213 02:22:28.355922 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:29.356700 kubelet[1915]: E1213 02:22:29.356660 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:29.861743 kubelet[1915]: E1213 02:22:29.861697 1915 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.22.41\": Get \"https://172.31.31.142:6443/api/v1/nodes/172.31.22.41?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:22:30.357859 kubelet[1915]: E1213 02:22:30.357826 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:30.755711 kubelet[1915]: E1213 02:22:30.755420 1915 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:22:31.358331 kubelet[1915]: E1213 02:22:31.358279 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:32.358707 kubelet[1915]: E1213 02:22:32.358657 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:33.359839 kubelet[1915]: E1213 02:22:33.359774 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:34.360730 kubelet[1915]: E1213 02:22:34.360676 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:35.361137 kubelet[1915]: E1213 02:22:35.361085 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:36.361915 kubelet[1915]: E1213 02:22:36.361856 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:37.362807 kubelet[1915]: E1213 02:22:37.362746 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:37.514207 kubelet[1915]: E1213 02:22:37.514163 1915 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": unexpected EOF"
Dec 13 02:22:37.524431 kubelet[1915]: E1213 02:22:37.524385 1915 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": read tcp 172.31.22.41:53668->172.31.31.142:6443: read: connection reset by peer"
Dec 13 02:22:37.524431 kubelet[1915]: I1213 02:22:37.524429 1915 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 13 02:22:37.525105 kubelet[1915]: E1213 02:22:37.525071 1915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused" interval="200ms"
Dec 13 02:22:37.726124 kubelet[1915]: E1213 02:22:37.726025 1915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused" interval="400ms"
Dec 13 02:22:38.127734 kubelet[1915]: E1213 02:22:38.127661 1915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused" interval="800ms"
Dec 13 02:22:38.363723 kubelet[1915]: E1213 02:22:38.363665 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:38.516544 kubelet[1915]: E1213 02:22:38.516426 1915 desired_state_of_world_populator.go:320] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: Get \"https://172.31.31.142:6443/api/v1/namespaces/default/persistentvolumeclaims/test-dynamic-volume-claim\": dial tcp 172.31.31.142:6443: connect: connection refused - error from a previous attempt: unexpected EOF" pod="default/test-pod-1" volumeName="config"
Dec 13 02:22:38.516856 kubelet[1915]: E1213 02:22:38.516836 1915 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.22.41\": Get \"https://172.31.31.142:6443/api/v1/nodes/172.31.22.41?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Dec 13 02:22:38.519261 kubelet[1915]: E1213 02:22:38.519217 1915 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.22.41\": Get \"https://172.31.31.142:6443/api/v1/nodes/172.31.22.41?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused"
Dec 13 02:22:38.519261 kubelet[1915]: E1213 02:22:38.519248 1915 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Dec 13 02:22:39.364032 kubelet[1915]: E1213 02:22:39.363986 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:40.364571 kubelet[1915]: E1213 02:22:40.364516 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:41.364852 kubelet[1915]: E1213 02:22:41.364664 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:42.365315 kubelet[1915]: E1213 02:22:42.365267 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:43.366379 kubelet[1915]: E1213 02:22:43.366327 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:44.367102 kubelet[1915]: E1213 02:22:44.367051 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:45.368224 kubelet[1915]: E1213 02:22:45.368148 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:46.368591 kubelet[1915]: E1213 02:22:46.368530 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:47.369005 kubelet[1915]: E1213 02:22:47.368950 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:48.259988 kubelet[1915]: E1213 02:22:48.259939 1915 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:48.369682 kubelet[1915]: E1213 02:22:48.369596 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:48.928475 kubelet[1915]: E1213 02:22:48.928434 1915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Dec 13 02:22:49.369933 kubelet[1915]: E1213 02:22:49.369884 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:50.370507 kubelet[1915]: E1213 02:22:50.370457 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:51.370997 kubelet[1915]: E1213 02:22:51.370932 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:52.372323 kubelet[1915]: E1213 02:22:52.372116 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:53.373032 kubelet[1915]: E1213 02:22:53.372987 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:54.374237 kubelet[1915]: E1213 02:22:54.374181 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:55.374813 kubelet[1915]: E1213 02:22:55.374748 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:56.375641 kubelet[1915]: E1213 02:22:56.375526 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:57.376327 kubelet[1915]: E1213 02:22:57.376276 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:58.377420 kubelet[1915]: E1213 02:22:58.377368 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:58.680500 kubelet[1915]: E1213 02:22:58.680134 1915 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.22.41\": Get \"https://172.31.31.142:6443/api/v1/nodes/172.31.22.41?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 02:22:59.377812 kubelet[1915]: E1213 02:22:59.377751 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:23:00.378358 kubelet[1915]: E1213 02:23:00.378304 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:23:00.529111 kubelet[1915]: E1213 02:23:00.529012 1915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.22.41?timeout=10s\": context deadline exceeded" interval="3.2s"
Dec 13 02:23:01.378553 kubelet[1915]: E1213 02:23:01.378500 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:23:02.378751 kubelet[1915]: E1213 02:23:02.378699 1915 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"