Dec 13 02:22:25.204412 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:22:25.204445 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:22:25.204460 kernel: BIOS-provided physical RAM map:
Dec 13 02:22:25.204471 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:22:25.204482 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:22:25.204492 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:22:25.204509 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 02:22:25.204521 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 02:22:25.204533 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 02:22:25.204544 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:22:25.204556 kernel: NX (Execute Disable) protection: active
Dec 13 02:22:25.204568 kernel: SMBIOS 2.7 present.
Dec 13 02:22:25.204580 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 02:22:25.204592 kernel: Hypervisor detected: KVM
Dec 13 02:22:25.204626 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:22:25.204636 kernel: kvm-clock: cpu 0, msr 6619b001, primary cpu clock
Dec 13 02:22:25.204649 kernel: kvm-clock: using sched offset of 7346973314 cycles
Dec 13 02:22:25.204662 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:22:25.204676 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 02:22:25.204689 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:22:25.204705 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:22:25.204718 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 02:22:25.204834 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:22:25.204847 kernel: Using GB pages for direct mapping
Dec 13 02:22:25.204860 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:22:25.204903 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 02:22:25.204916 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 02:22:25.204929 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 02:22:25.204942 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 02:22:25.204958 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 02:22:25.204971 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:22:25.204984 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 02:22:25.204997 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 02:22:25.205010 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 02:22:25.205023 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 02:22:25.205035 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 02:22:25.205048 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:22:25.205064 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 02:22:25.205077 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 02:22:25.205090 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 02:22:25.205108 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 02:22:25.205122 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 02:22:25.205135 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 02:22:25.205149 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 02:22:25.205165 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 02:22:25.205179 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 02:22:25.205193 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 02:22:25.205207 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:22:25.205220 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:22:25.205234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 02:22:25.205248 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 02:22:25.205262 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 02:22:25.205278 kernel: Zone ranges:
Dec 13 02:22:25.205292 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:22:25.205306 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 02:22:25.205320 kernel: Normal empty
Dec 13 02:22:25.205333 kernel: Movable zone start for each node
Dec 13 02:22:25.205347 kernel: Early memory node ranges
Dec 13 02:22:25.205360 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:22:25.205374 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 02:22:25.205388 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 02:22:25.205404 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:22:25.205419 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:22:25.205433 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 02:22:25.205446 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 02:22:25.205459 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:22:25.205471 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 02:22:25.205484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:22:25.205498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:22:25.205511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:22:25.205527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:22:25.205541 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:22:25.205556 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:22:25.205569 kernel: TSC deadline timer available
Dec 13 02:22:25.205584 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:22:25.205615 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 02:22:25.205627 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:22:25.205640 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:22:25.205654 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:22:25.205672 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:22:25.205685 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:22:25.205699 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:22:25.205713 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 02:22:25.205726 kernel: kvm-guest: PV spinlocks enabled
Dec 13 02:22:25.205740 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:22:25.205753 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 02:22:25.205767 kernel: Policy zone: DMA32
Dec 13 02:22:25.205783 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:22:25.205801 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:22:25.205815 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:22:25.205829 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:22:25.205851 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:22:25.205866 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123080K reserved, 0K cma-reserved)
Dec 13 02:22:25.205880 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:22:25.205894 kernel: Kernel/User page tables isolation: enabled
Dec 13 02:22:25.205908 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:22:25.205924 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:22:25.205939 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:22:25.205953 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:22:25.205968 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:22:25.205982 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:22:25.205997 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:22:25.206012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:22:25.206026 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:22:25.206040 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:22:25.206057 kernel: random: crng init done
Dec 13 02:22:25.206071 kernel: Console: colour VGA+ 80x25
Dec 13 02:22:25.206085 kernel: printk: console [ttyS0] enabled
Dec 13 02:22:25.206099 kernel: ACPI: Core revision 20210730
Dec 13 02:22:25.206114 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 02:22:25.206128 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:22:25.206142 kernel: x2apic enabled
Dec 13 02:22:25.206156 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:22:25.206169 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 02:22:25.206186 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 02:22:25.206200 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:22:25.206214 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:22:25.206229 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:22:25.206253 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:22:25.206270 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:22:25.206285 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:22:25.206300 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 02:22:25.206315 kernel: RETBleed: Vulnerable
Dec 13 02:22:25.206330 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:22:25.206344 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:22:25.206359 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:22:25.206373 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:22:25.206388 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:22:25.206406 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:22:25.206421 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:22:25.206436 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 02:22:25.206451 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 02:22:25.206466 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 02:22:25.206481 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 02:22:25.206498 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 02:22:25.206513 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 02:22:25.206527 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:22:25.206542 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 02:22:25.206557 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 02:22:25.206572 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 02:22:25.206586 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 02:22:25.206618 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 02:22:25.206631 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 02:22:25.206644 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 02:22:25.206658 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:22:25.206674 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:22:25.206687 kernel: LSM: Security Framework initializing
Dec 13 02:22:25.206700 kernel: SELinux: Initializing.
Dec 13 02:22:25.206804 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:22:25.206825 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:22:25.206841 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 02:22:25.206885 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 02:22:25.206900 kernel: signal: max sigframe size: 3632
Dec 13 02:22:25.206915 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:22:25.206929 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:22:25.206948 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:22:25.206962 kernel: x86: Booting SMP configuration:
Dec 13 02:22:25.206977 kernel: .... node #0, CPUs: #1
Dec 13 02:22:25.206991 kernel: kvm-clock: cpu 1, msr 6619b041, secondary cpu clock
Dec 13 02:22:25.207006 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 02:22:25.207021 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 02:22:25.207037 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:22:25.207052 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:22:25.207066 kernel: smpboot: Max logical packages: 1
Dec 13 02:22:25.207083 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 02:22:25.207098 kernel: devtmpfs: initialized
Dec 13 02:22:25.207113 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:22:25.207128 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:22:25.207142 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:22:25.207157 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:22:25.207170 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:22:25.207185 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:22:25.207199 kernel: audit: type=2000 audit(1734056544.675:1): state=initialized audit_enabled=0 res=1
Dec 13 02:22:25.207216 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:22:25.207231 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:22:25.207246 kernel: cpuidle: using governor menu
Dec 13 02:22:25.207260 kernel: ACPI: bus type PCI registered
Dec 13 02:22:25.207275 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:22:25.207289 kernel: dca service started, version 1.12.1
Dec 13 02:22:25.207304 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:22:25.207319 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:22:25.207334 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:22:25.207351 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:22:25.207366 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:22:25.207381 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:22:25.207396 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:22:25.207411 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:22:25.207425 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:22:25.207440 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:22:25.207455 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:22:25.207469 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 02:22:25.207486 kernel: ACPI: Interpreter enabled
Dec 13 02:22:25.207501 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:22:25.207515 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:22:25.207530 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:22:25.207544 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 02:22:25.207558 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:22:25.207801 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:22:25.207983 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 02:22:25.208009 kernel: acpiphp: Slot [3] registered
Dec 13 02:22:25.208023 kernel: acpiphp: Slot [4] registered
Dec 13 02:22:25.208037 kernel: acpiphp: Slot [5] registered
Dec 13 02:22:25.208051 kernel: acpiphp: Slot [6] registered
Dec 13 02:22:25.208064 kernel: acpiphp: Slot [7] registered
Dec 13 02:22:25.208077 kernel: acpiphp: Slot [8] registered
Dec 13 02:22:25.208090 kernel: acpiphp: Slot [9] registered
Dec 13 02:22:25.208103 kernel: acpiphp: Slot [10] registered
Dec 13 02:22:25.208116 kernel: acpiphp: Slot [11] registered
Dec 13 02:22:25.208133 kernel: acpiphp: Slot [12] registered
Dec 13 02:22:25.208146 kernel: acpiphp: Slot [13] registered
Dec 13 02:22:25.208161 kernel: acpiphp: Slot [14] registered
Dec 13 02:22:25.208172 kernel: acpiphp: Slot [15] registered
Dec 13 02:22:25.208187 kernel: acpiphp: Slot [16] registered
Dec 13 02:22:25.208201 kernel: acpiphp: Slot [17] registered
Dec 13 02:22:25.208216 kernel: acpiphp: Slot [18] registered
Dec 13 02:22:25.208230 kernel: acpiphp: Slot [19] registered
Dec 13 02:22:25.208243 kernel: acpiphp: Slot [20] registered
Dec 13 02:22:25.208260 kernel: acpiphp: Slot [21] registered
Dec 13 02:22:25.208275 kernel: acpiphp: Slot [22] registered
Dec 13 02:22:25.208289 kernel: acpiphp: Slot [23] registered
Dec 13 02:22:25.208302 kernel: acpiphp: Slot [24] registered
Dec 13 02:22:25.208315 kernel: acpiphp: Slot [25] registered
Dec 13 02:22:25.208326 kernel: acpiphp: Slot [26] registered
Dec 13 02:22:25.208338 kernel: acpiphp: Slot [27] registered
Dec 13 02:22:25.208350 kernel: acpiphp: Slot [28] registered
Dec 13 02:22:25.208363 kernel: acpiphp: Slot [29] registered
Dec 13 02:22:25.208375 kernel: acpiphp: Slot [30] registered
Dec 13 02:22:25.208390 kernel: acpiphp: Slot [31] registered
Dec 13 02:22:25.208403 kernel: PCI host bridge to bus 0000:00
Dec 13 02:22:25.208532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:22:25.208882 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:22:25.209082 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:22:25.209198 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:22:25.209311 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:22:25.209457 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:22:25.209596 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:22:25.209746 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 02:22:25.209878 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 02:22:25.210005 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 02:22:25.210130 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 02:22:25.210254 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 02:22:25.210384 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 02:22:25.210509 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 02:22:25.210648 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 02:22:25.210955 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 02:22:25.211090 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 10742 usecs
Dec 13 02:22:25.211240 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 02:22:25.211369 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 02:22:25.211499 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 02:22:25.212948 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:22:25.213329 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 02:22:25.213457 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 02:22:25.213582 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 02:22:25.213722 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 02:22:25.213743 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:22:25.213756 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:22:25.213768 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:22:25.213781 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:22:25.213793 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:22:25.213806 kernel: iommu: Default domain type: Translated
Dec 13 02:22:25.213818 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:22:25.213939 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 02:22:25.214053 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:22:25.214170 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 02:22:25.214186 kernel: vgaarb: loaded
Dec 13 02:22:25.214199 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:22:25.214212 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 02:22:25.214224 kernel: PTP clock support registered
Dec 13 02:22:25.214237 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:22:25.214249 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:22:25.214262 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:22:25.214277 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 02:22:25.214289 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 02:22:25.214301 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 02:22:25.214314 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:22:25.214326 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:22:25.214338 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:22:25.214350 kernel: pnp: PnP ACPI init
Dec 13 02:22:25.214363 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:22:25.214376 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:22:25.214391 kernel: NET: Registered PF_INET protocol family
Dec 13 02:22:25.214404 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:22:25.214418 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:22:25.214431 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:22:25.214445 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:22:25.214459 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 02:22:25.214474 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:22:25.214489 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:22:25.214504 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:22:25.214522 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:22:25.214548 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:22:25.214706 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:22:25.214983 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:22:25.215104 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:22:25.215218 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:22:25.215350 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:22:25.215481 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 02:22:25.215506 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:22:25.215522 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:22:25.215538 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 02:22:25.215554 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:22:25.215569 kernel: Initialise system trusted keyrings
Dec 13 02:22:25.215584 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:22:25.215612 kernel: Key type asymmetric registered
Dec 13 02:22:25.215627 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:22:25.215646 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:22:25.215661 kernel: io scheduler mq-deadline registered
Dec 13 02:22:25.215676 kernel: io scheduler kyber registered
Dec 13 02:22:25.215691 kernel: io scheduler bfq registered
Dec 13 02:22:25.215706 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:22:25.215722 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:22:25.215737 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:22:25.215753 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:22:25.215768 kernel: i8042: Warning: Keylock active
Dec 13 02:22:25.215785 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:22:25.215800 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:22:25.215973 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 02:22:25.216159 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 02:22:25.216279 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:22:24 UTC (1734056544)
Dec 13 02:22:25.216394 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 02:22:25.216415 kernel: intel_pstate: CPU model not supported
Dec 13 02:22:25.216430 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:22:25.216450 kernel: Segment Routing with IPv6
Dec 13 02:22:25.216464 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:22:25.216479 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:22:25.216495 kernel: Key type dns_resolver registered
Dec 13 02:22:25.216509 kernel: IPI shorthand broadcast: enabled
Dec 13 02:22:25.216525 kernel: sched_clock: Marking stable (442258750, 241232424)->(788429206, -104938032)
Dec 13 02:22:25.216540 kernel: registered taskstats version 1
Dec 13 02:22:25.216555 kernel: Loading compiled-in X.509 certificates
Dec 13 02:22:25.216571 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:22:25.216588 kernel: Key type .fscrypt registered
Dec 13 02:22:25.216614 kernel: Key type fscrypt-provisioning registered
Dec 13 02:22:25.216630 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:22:25.216645 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:22:25.216661 kernel: ima: No architecture policies found
Dec 13 02:22:25.216676 kernel: clk: Disabling unused clocks
Dec 13 02:22:25.216691 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:22:25.216707 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:22:25.216824 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:22:25.216846 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:22:25.216907 kernel: Run /init as init process
Dec 13 02:22:25.216924 kernel: with arguments:
Dec 13 02:22:25.216940 kernel: /init
Dec 13 02:22:25.216954 kernel: with environment:
Dec 13 02:22:25.216969 kernel: HOME=/
Dec 13 02:22:25.216983 kernel: TERM=linux
Dec 13 02:22:25.216998 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:22:25.217016 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:22:25.217111 systemd[1]: Detected virtualization amazon.
Dec 13 02:22:25.217128 systemd[1]: Detected architecture x86-64.
Dec 13 02:22:25.217145 systemd[1]: Running in initrd.
Dec 13 02:22:25.217176 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:22:25.217195 systemd[1]: Hostname set to .
Dec 13 02:22:25.217215 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:22:25.217231 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:22:25.217247 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:22:25.217262 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:22:25.217278 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:22:25.217294 systemd[1]: Reached target paths.target.
Dec 13 02:22:25.217310 systemd[1]: Reached target slices.target.
Dec 13 02:22:25.217327 systemd[1]: Reached target swap.target.
Dec 13 02:22:25.217346 systemd[1]: Reached target timers.target.
Dec 13 02:22:25.217365 systemd[1]: Listening on iscsid.socket.
Dec 13 02:22:25.217382 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:22:25.217399 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:22:25.217415 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:22:25.217432 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:22:25.217446 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:22:25.217462 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:22:25.217482 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:22:25.217501 systemd[1]: Reached target sockets.target.
Dec 13 02:22:25.217517 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:22:25.217533 systemd[1]: Finished network-cleanup.service.
Dec 13 02:22:25.217550 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:22:25.217567 systemd[1]: Starting systemd-journald.service...
Dec 13 02:22:25.217583 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:22:25.217611 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:22:25.217671 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:22:25.217695 systemd-journald[185]: Journal started
Dec 13 02:22:25.217767 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2d879a6b1ad4227fcf04d840ab8231) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:22:25.227622 systemd[1]: Started systemd-journald.service.
Dec 13 02:22:25.229888 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:22:25.409825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:22:25.409875 kernel: Bridge firewalling registered Dec 13 02:22:25.409899 kernel: SCSI subsystem initialized Dec 13 02:22:25.409919 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:22:25.409939 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:22:25.409955 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:22:25.409975 kernel: audit: type=1130 audit(1734056545.400:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.409991 kernel: audit: type=1130 audit(1734056545.405:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.286656 systemd-modules-load[186]: Inserted module 'br_netfilter' Dec 13 02:22:25.415370 kernel: audit: type=1130 audit(1734056545.410:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:25.301675 systemd-resolved[187]: Positive Trust Anchors: Dec 13 02:22:25.301692 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:22:25.422391 kernel: audit: type=1130 audit(1734056545.416:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.301750 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:22:25.310298 systemd-resolved[187]: Defaulting to hostname 'linux'. Dec 13 02:22:25.435315 kernel: audit: type=1130 audit(1734056545.429:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:25.332518 systemd-modules-load[186]: Inserted module 'dm_multipath' Dec 13 02:22:25.443888 kernel: audit: type=1130 audit(1734056545.436:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.401415 systemd[1]: Started systemd-resolved.service. Dec 13 02:22:25.405972 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:22:25.415615 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:22:25.422515 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:22:25.430750 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:22:25.437411 systemd[1]: Reached target nss-lookup.target. Dec 13 02:22:25.445875 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:22:25.447416 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:22:25.449368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:22:25.480699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:22:25.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.486631 kernel: audit: type=1130 audit(1734056545.480:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.488754 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 02:22:25.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.493619 kernel: audit: type=1130 audit(1734056545.488:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.499540 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:22:25.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.502181 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:22:25.507741 kernel: audit: type=1130 audit(1734056545.500:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.522657 dracut-cmdline[206]: dracut-dracut-053 Dec 13 02:22:25.526259 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:22:25.638712 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:22:25.670868 kernel: iscsi: registered transport (tcp) Dec 13 02:22:25.702647 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:22:25.702714 kernel: QLogic iSCSI HBA Driver Dec 13 02:22:25.752020 systemd[1]: Finished dracut-cmdline.service. 
Dec 13 02:22:25.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:25.754686 systemd[1]: Starting dracut-pre-udev.service... Dec 13 02:22:25.816675 kernel: raid6: avx512x4 gen() 12246 MB/s Dec 13 02:22:25.833660 kernel: raid6: avx512x4 xor() 5800 MB/s Dec 13 02:22:25.851635 kernel: raid6: avx512x2 gen() 12902 MB/s Dec 13 02:22:25.868632 kernel: raid6: avx512x2 xor() 16979 MB/s Dec 13 02:22:25.886674 kernel: raid6: avx512x1 gen() 15307 MB/s Dec 13 02:22:25.904634 kernel: raid6: avx512x1 xor() 11866 MB/s Dec 13 02:22:25.923632 kernel: raid6: avx2x4 gen() 3146 MB/s Dec 13 02:22:25.944642 kernel: raid6: avx2x4 xor() 1986 MB/s Dec 13 02:22:25.961640 kernel: raid6: avx2x2 gen() 9209 MB/s Dec 13 02:22:25.980000 kernel: raid6: avx2x2 xor() 6654 MB/s Dec 13 02:22:25.998724 kernel: raid6: avx2x1 gen() 3793 MB/s Dec 13 02:22:26.019634 kernel: raid6: avx2x1 xor() 477 MB/s Dec 13 02:22:26.036635 kernel: raid6: sse2x4 gen() 4517 MB/s Dec 13 02:22:26.055698 kernel: raid6: sse2x4 xor() 2972 MB/s Dec 13 02:22:26.072629 kernel: raid6: sse2x2 gen() 4768 MB/s Dec 13 02:22:26.089633 kernel: raid6: sse2x2 xor() 3116 MB/s Dec 13 02:22:26.109631 kernel: raid6: sse2x1 gen() 2978 MB/s Dec 13 02:22:26.132449 kernel: raid6: sse2x1 xor() 2093 MB/s Dec 13 02:22:26.132524 kernel: raid6: using algorithm avx512x1 gen() 15307 MB/s Dec 13 02:22:26.132553 kernel: raid6: .... xor() 11866 MB/s, rmw enabled Dec 13 02:22:26.133205 kernel: raid6: using avx512x2 recovery algorithm Dec 13 02:22:26.164635 kernel: xor: automatically using best checksumming function avx Dec 13 02:22:26.289631 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:22:26.303523 systemd[1]: Finished dracut-pre-udev.service. 
Dec 13 02:22:26.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:26.305000 audit: BPF prog-id=7 op=LOAD Dec 13 02:22:26.305000 audit: BPF prog-id=8 op=LOAD Dec 13 02:22:26.306369 systemd[1]: Starting systemd-udevd.service... Dec 13 02:22:26.324370 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 02:22:26.331617 systemd[1]: Started systemd-udevd.service. Dec 13 02:22:26.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:26.333996 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:22:26.384375 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation Dec 13 02:22:26.427781 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:22:26.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:26.429752 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:22:26.496981 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:22:26.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:26.565632 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:22:26.574527 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 02:22:26.596712 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 02:22:26.596877 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 13 02:22:26.597002 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 02:22:26.597019 kernel: AES CTR mode by8 optimization enabled Dec 13 02:22:26.597035 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:c0:a0:a1:ae:f9 Dec 13 02:22:26.598836 (udev-worker)[444]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:22:26.756756 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 02:22:26.757004 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 02:22:26.757027 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 02:22:26.757176 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:22:26.757195 kernel: GPT:9289727 != 16777215 Dec 13 02:22:26.757213 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:22:26.757230 kernel: GPT:9289727 != 16777215 Dec 13 02:22:26.757252 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:22:26.757269 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:22:26.757286 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (437) Dec 13 02:22:26.792344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:22:26.805821 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:22:26.821313 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:22:26.827299 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:22:26.828724 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:22:26.832757 systemd[1]: Starting disk-uuid.service... Dec 13 02:22:26.842702 disk-uuid[593]: Primary Header is updated. Dec 13 02:22:26.842702 disk-uuid[593]: Secondary Entries is updated. Dec 13 02:22:26.842702 disk-uuid[593]: Secondary Header is updated. 
Dec 13 02:22:26.851785 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:22:26.859718 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:22:26.866678 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:22:27.879518 disk-uuid[594]: The operation has completed successfully. Dec 13 02:22:27.887315 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:22:28.085434 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:22:28.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.085546 systemd[1]: Finished disk-uuid.service. Dec 13 02:22:28.120987 systemd[1]: Starting verity-setup.service... Dec 13 02:22:28.164777 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:22:28.315648 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:22:28.319967 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:22:28.325728 systemd[1]: Finished verity-setup.service. Dec 13 02:22:28.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.453619 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:22:28.454298 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:22:28.456754 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:22:28.458113 systemd[1]: Starting ignition-setup.service... Dec 13 02:22:28.484027 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 02:22:28.512693 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:22:28.512847 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:22:28.512869 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:22:28.522629 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:22:28.538229 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:22:28.547023 systemd[1]: Finished ignition-setup.service. Dec 13 02:22:28.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.549279 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:22:28.585197 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:22:28.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.587000 audit: BPF prog-id=9 op=LOAD Dec 13 02:22:28.589247 systemd[1]: Starting systemd-networkd.service... Dec 13 02:22:28.614022 systemd-networkd[1105]: lo: Link UP Dec 13 02:22:28.614333 systemd-networkd[1105]: lo: Gained carrier Dec 13 02:22:28.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.617653 systemd-networkd[1105]: Enumeration completed Dec 13 02:22:28.617761 systemd[1]: Started systemd-networkd.service. Dec 13 02:22:28.618623 systemd-networkd[1105]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:22:28.620690 systemd[1]: Reached target network.target. Dec 13 02:22:28.625364 systemd[1]: Starting iscsiuio.service... 
Dec 13 02:22:28.628398 systemd-networkd[1105]: eth0: Link UP Dec 13 02:22:28.628405 systemd-networkd[1105]: eth0: Gained carrier Dec 13 02:22:28.648116 systemd[1]: Started iscsiuio.service. Dec 13 02:22:28.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.650220 systemd[1]: Starting iscsid.service... Dec 13 02:22:28.659326 iscsid[1110]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:22:28.659326 iscsid[1110]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 02:22:28.659326 iscsid[1110]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:22:28.659326 iscsid[1110]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:22:28.659326 iscsid[1110]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:22:28.659326 iscsid[1110]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:22:28.669017 systemd[1]: Started iscsid.service. Dec 13 02:22:28.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.680160 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:22:28.687783 systemd-networkd[1105]: eth0: DHCPv4 address 172.31.24.110/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:22:28.714521 systemd[1]: Finished dracut-initqueue.service. 
Dec 13 02:22:28.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:28.724910 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:22:28.727826 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:22:28.728893 systemd[1]: Reached target remote-fs.target. Dec 13 02:22:28.735454 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:22:28.769958 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:22:28.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.010220 ignition[1061]: Ignition 2.14.0 Dec 13 02:22:29.010237 ignition[1061]: Stage: fetch-offline Dec 13 02:22:29.010485 ignition[1061]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:22:29.010530 ignition[1061]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:22:29.048140 ignition[1061]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:22:29.049690 ignition[1061]: Ignition finished successfully Dec 13 02:22:29.051889 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:22:29.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.055019 systemd[1]: Starting ignition-fetch.service... 
Dec 13 02:22:29.065257 ignition[1129]: Ignition 2.14.0 Dec 13 02:22:29.065270 ignition[1129]: Stage: fetch Dec 13 02:22:29.065474 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:22:29.065507 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:22:29.074632 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:22:29.075960 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:22:29.110065 ignition[1129]: INFO : PUT result: OK Dec 13 02:22:29.112843 ignition[1129]: DEBUG : parsed url from cmdline: "" Dec 13 02:22:29.112843 ignition[1129]: INFO : no config URL provided Dec 13 02:22:29.112843 ignition[1129]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:22:29.112843 ignition[1129]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 02:22:29.123866 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:22:29.123866 ignition[1129]: INFO : PUT result: OK Dec 13 02:22:29.123866 ignition[1129]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 02:22:29.132838 ignition[1129]: INFO : GET result: OK Dec 13 02:22:29.132838 ignition[1129]: DEBUG : parsing config with SHA512: fb49a86defad586d16cb638fd6998a8acb921b5006f5077d9f127b6ad838d56f2e4135dc8a77fd0077141278921dd6d419d96d354d05ef5504ff84239a64e8f9 Dec 13 02:22:29.151786 unknown[1129]: fetched base config from "system" Dec 13 02:22:29.151802 unknown[1129]: fetched base config from "system" Dec 13 02:22:29.152494 ignition[1129]: fetch: fetch complete Dec 13 02:22:29.151810 unknown[1129]: fetched user config from "aws" Dec 13 02:22:29.152501 ignition[1129]: fetch: fetch passed Dec 13 02:22:29.152556 ignition[1129]: Ignition finished successfully Dec 13 02:22:29.156912 systemd[1]: Finished 
ignition-fetch.service. Dec 13 02:22:29.164772 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 13 02:22:29.164809 kernel: audit: type=1130 audit(1734056549.158:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.165352 systemd[1]: Starting ignition-kargs.service... Dec 13 02:22:29.178680 ignition[1135]: Ignition 2.14.0 Dec 13 02:22:29.178693 ignition[1135]: Stage: kargs Dec 13 02:22:29.178894 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:22:29.178925 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:22:29.191132 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:22:29.192870 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:22:29.194497 ignition[1135]: INFO : PUT result: OK Dec 13 02:22:29.198211 ignition[1135]: kargs: kargs passed Dec 13 02:22:29.198269 ignition[1135]: Ignition finished successfully Dec 13 02:22:29.201058 systemd[1]: Finished ignition-kargs.service. Dec 13 02:22:29.210676 kernel: audit: type=1130 audit(1734056549.202:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:29.203747 systemd[1]: Starting ignition-disks.service... Dec 13 02:22:29.215393 ignition[1141]: Ignition 2.14.0 Dec 13 02:22:29.215404 ignition[1141]: Stage: disks Dec 13 02:22:29.215558 ignition[1141]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:22:29.215581 ignition[1141]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:22:29.233444 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:22:29.235349 ignition[1141]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:22:29.237586 ignition[1141]: INFO : PUT result: OK Dec 13 02:22:29.241482 ignition[1141]: disks: disks passed Dec 13 02:22:29.241557 ignition[1141]: Ignition finished successfully Dec 13 02:22:29.243814 systemd[1]: Finished ignition-disks.service. Dec 13 02:22:29.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.246239 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:22:29.254098 kernel: audit: type=1130 audit(1734056549.245:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.254055 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:22:29.257918 systemd[1]: Reached target local-fs.target. Dec 13 02:22:29.260229 systemd[1]: Reached target sysinit.target. Dec 13 02:22:29.262338 systemd[1]: Reached target basic.target. Dec 13 02:22:29.265921 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:22:29.314579 systemd-fsck[1149]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:22:29.318512 systemd[1]: Finished systemd-fsck-root.service. 
Dec 13 02:22:29.333121 kernel: audit: type=1130 audit(1734056549.320:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.321609 systemd[1]: Mounting sysroot.mount... Dec 13 02:22:29.359625 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:22:29.363014 systemd[1]: Mounted sysroot.mount. Dec 13 02:22:29.364735 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:22:29.372097 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:22:29.378653 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:22:29.378868 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:22:29.379016 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:22:29.392751 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:22:29.398918 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:22:29.402163 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 02:22:29.416486 initrd-setup-root[1171]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:22:29.427437 initrd-setup-root[1179]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:22:29.434569 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1166) Dec 13 02:22:29.434785 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:22:29.434808 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:22:29.434948 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:22:29.442752 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:22:29.451225 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:22:29.454722 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:22:29.461012 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:22:29.559912 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:22:29.569160 kernel: audit: type=1130 audit(1734056549.559:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.561171 systemd[1]: Starting ignition-mount.service... Dec 13 02:22:29.570353 systemd[1]: Starting sysroot-boot.service... Dec 13 02:22:29.578430 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:22:29.578558 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 02:22:29.599232 ignition[1231]: INFO : Ignition 2.14.0 Dec 13 02:22:29.602646 ignition[1231]: INFO : Stage: mount Dec 13 02:22:29.602646 ignition[1231]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:22:29.602646 ignition[1231]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:22:29.621291 ignition[1231]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:22:29.622748 ignition[1231]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:22:29.626364 ignition[1231]: INFO : PUT result: OK Dec 13 02:22:29.631952 ignition[1231]: INFO : mount: mount passed Dec 13 02:22:29.633332 ignition[1231]: INFO : Ignition finished successfully Dec 13 02:22:29.633124 systemd[1]: Finished ignition-mount.service. Dec 13 02:22:29.645751 kernel: audit: type=1130 audit(1734056549.637:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.639329 systemd[1]: Starting ignition-files.service... Dec 13 02:22:29.649730 systemd[1]: Finished sysroot-boot.service. Dec 13 02:22:29.656052 kernel: audit: type=1130 audit(1734056549.650:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:29.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:29.654285 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:22:29.683752 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1241) Dec 13 02:22:29.689085 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:22:29.689149 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:22:29.689162 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:22:29.703687 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:22:29.708041 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:22:29.721730 ignition[1260]: INFO : Ignition 2.14.0 Dec 13 02:22:29.722876 ignition[1260]: INFO : Stage: files Dec 13 02:22:29.722876 ignition[1260]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:22:29.722876 ignition[1260]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:22:29.733986 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:22:29.736082 ignition[1260]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:22:29.738184 ignition[1260]: INFO : PUT result: OK Dec 13 02:22:29.743193 ignition[1260]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:22:29.750942 ignition[1260]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:22:29.750942 ignition[1260]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:22:29.773477 ignition[1260]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:22:29.775763 ignition[1260]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:22:29.785924 unknown[1260]: wrote ssh authorized keys file for user: core Dec 13 
02:22:29.788786 ignition[1260]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:22:29.788786 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:22:29.788786 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:22:29.788786 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:22:29.788786 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:22:29.807760 ignition[1260]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1669741699" Dec 13 02:22:29.813393 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1260) Dec 13 02:22:29.813433 ignition[1260]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1669741699": device or resource busy Dec 13 02:22:29.813433 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1669741699", trying btrfs: device or resource busy Dec 13 02:22:29.813433 ignition[1260]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1669741699" Dec 13 02:22:29.813433 ignition[1260]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1669741699" Dec 13 02:22:29.823170 ignition[1260]: INFO : op(3): [started] unmounting "/mnt/oem1669741699" Dec 13 02:22:29.824936 ignition[1260]: INFO : op(3): [finished] unmounting "/mnt/oem1669741699" Dec 13 02:22:29.824936 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:22:29.824936 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 
02:22:29.833514 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:22:29.833514 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:22:29.838072 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:22:29.840297 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:22:29.840297 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:22:29.840297 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:22:29.840297 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:22:29.863232 ignition[1260]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1368713000" Dec 13 02:22:29.871899 ignition[1260]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1368713000": device or resource busy Dec 13 02:22:29.871899 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1368713000", trying btrfs: device or resource busy Dec 13 02:22:29.871899 ignition[1260]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1368713000" Dec 13 02:22:29.883898 ignition[1260]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1368713000" Dec 13 02:22:29.883898 ignition[1260]: INFO : op(6): [started] unmounting "/mnt/oem1368713000" Dec 13 02:22:29.888466 ignition[1260]: INFO 
: op(6): [finished] unmounting "/mnt/oem1368713000" Dec 13 02:22:29.888466 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:22:29.894387 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:22:29.894387 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:22:29.910880 ignition[1260]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2820051816" Dec 13 02:22:29.912729 ignition[1260]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2820051816": device or resource busy Dec 13 02:22:29.912729 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2820051816", trying btrfs: device or resource busy Dec 13 02:22:29.912729 ignition[1260]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2820051816" Dec 13 02:22:29.919540 ignition[1260]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2820051816" Dec 13 02:22:29.919540 ignition[1260]: INFO : op(9): [started] unmounting "/mnt/oem2820051816" Dec 13 02:22:29.919540 ignition[1260]: INFO : op(9): [finished] unmounting "/mnt/oem2820051816" Dec 13 02:22:29.919540 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:22:29.919540 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:22:29.919540 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:22:29.931913 ignition[1260]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3060488063" Dec 13 02:22:29.931913 ignition[1260]: CRITICAL : op(a): [failed] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem3060488063": device or resource busy Dec 13 02:22:29.931913 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3060488063", trying btrfs: device or resource busy Dec 13 02:22:29.931913 ignition[1260]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3060488063" Dec 13 02:22:29.931913 ignition[1260]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3060488063" Dec 13 02:22:29.946102 ignition[1260]: INFO : op(c): [started] unmounting "/mnt/oem3060488063" Dec 13 02:22:29.946102 ignition[1260]: INFO : op(c): [finished] unmounting "/mnt/oem3060488063" Dec 13 02:22:29.946102 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:22:29.946102 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:22:29.946102 ignition[1260]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:22:30.091831 systemd-networkd[1105]: eth0: Gained IPv6LL Dec 13 02:22:30.307380 ignition[1260]: INFO : GET result: OK Dec 13 02:22:30.755835 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:22:30.755835 ignition[1260]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(d): [started] processing unit "amazon-ssm-agent.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(d): op(e): [started] writing unit "amazon-ssm-agent.service" at 
"/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(d): op(e): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(d): [finished] processing unit "amazon-ssm-agent.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(f): [started] processing unit "nvidia.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(f): [finished] processing unit "nvidia.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(10): [started] processing unit "containerd.service" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(10): op(11): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:22:30.760139 ignition[1260]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:22:30.792547 ignition[1260]: INFO : files: op(10): [finished] processing unit "containerd.service" Dec 13 02:22:30.792547 ignition[1260]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:22:30.792547 ignition[1260]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:22:30.792547 ignition[1260]: INFO : files: op(13): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:22:30.792547 ignition[1260]: INFO : files: op(13): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:22:30.792547 ignition[1260]: INFO : files: op(14): [started] setting preset to enabled for "nvidia.service" Dec 13 02:22:30.792547 ignition[1260]: INFO : files: op(14): [finished] setting preset to enabled for "nvidia.service" Dec 13 02:22:30.804294 ignition[1260]: INFO : files: 
createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:22:30.804294 ignition[1260]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:22:30.804294 ignition[1260]: INFO : files: files passed Dec 13 02:22:30.804294 ignition[1260]: INFO : Ignition finished successfully Dec 13 02:22:30.806213 systemd[1]: Finished ignition-files.service. Dec 13 02:22:30.817869 kernel: audit: type=1130 audit(1734056550.810:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.822225 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:22:30.823461 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:22:30.824654 systemd[1]: Starting ignition-quench.service... Dec 13 02:22:30.831227 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:22:30.831364 systemd[1]: Finished ignition-quench.service. Dec 13 02:22:30.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.840461 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Dec 13 02:22:30.844127 kernel: audit: type=1130 audit(1734056550.834:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.844162 kernel: audit: type=1131 audit(1734056550.834:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.844181 initrd-setup-root-after-ignition[1285]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:22:30.844244 systemd[1]: Reached target ignition-complete.target. Dec 13 02:22:30.845398 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:22:30.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.863525 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:22:30.863661 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:22:30.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.866839 systemd[1]: Reached target initrd-fs.target. Dec 13 02:22:30.868395 systemd[1]: Reached target initrd.target. Dec 13 02:22:30.870140 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:22:30.872783 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:22:30.885920 systemd[1]: Finished dracut-pre-pivot.service. 
Dec 13 02:22:30.887115 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:22:30.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.903016 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:22:30.905234 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:22:30.907618 systemd[1]: Stopped target timers.target. Dec 13 02:22:30.909991 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:22:30.918213 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:22:30.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.918792 systemd[1]: Stopped target initrd.target. Dec 13 02:22:30.923612 systemd[1]: Stopped target basic.target. Dec 13 02:22:30.926131 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:22:30.928191 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:22:30.931210 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:22:30.932298 systemd[1]: Stopped target remote-fs.target. Dec 13 02:22:30.935488 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:22:30.938446 systemd[1]: Stopped target sysinit.target. Dec 13 02:22:30.942809 systemd[1]: Stopped target local-fs.target. Dec 13 02:22:30.944760 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:22:30.946510 systemd[1]: Stopped target swap.target. Dec 13 02:22:30.948672 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:22:30.948818 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:22:30.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:30.951641 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:22:30.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.952705 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:22:30.952842 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:22:30.954747 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:22:30.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.955813 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:22:30.958972 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:22:30.960156 systemd[1]: Stopped ignition-files.service. Dec 13 02:22:30.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:30.976539 systemd[1]: Stopping ignition-mount.service... Dec 13 02:22:30.977965 systemd[1]: Stopping iscsid.service... Dec 13 02:22:30.996830 iscsid[1110]: iscsid shutting down. Dec 13 02:22:30.991725 systemd[1]: Stopping sysroot-boot.service... 
Dec 13 02:22:31.001298 ignition[1298]: INFO : Ignition 2.14.0 Dec 13 02:22:31.002520 ignition[1298]: INFO : Stage: umount Dec 13 02:22:31.004648 ignition[1298]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:22:31.004648 ignition[1298]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:22:31.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.004244 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:22:31.004793 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:22:31.007283 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:22:31.007508 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:22:31.014148 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:22:31.014284 systemd[1]: Stopped iscsid.service. Dec 13 02:22:31.022809 systemd[1]: Stopping iscsiuio.service... Dec 13 02:22:31.023896 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:22:31.024037 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:22:31.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:31.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.029733 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:22:31.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.029859 systemd[1]: Stopped iscsiuio.service. Dec 13 02:22:31.032207 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:22:31.036678 ignition[1298]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:22:31.038267 ignition[1298]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:22:31.040262 ignition[1298]: INFO : PUT result: OK Dec 13 02:22:31.041815 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:22:31.041974 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:22:31.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.045793 ignition[1298]: INFO : umount: umount passed Dec 13 02:22:31.046766 ignition[1298]: INFO : Ignition finished successfully Dec 13 02:22:31.047716 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:22:31.047828 systemd[1]: Stopped ignition-mount.service. Dec 13 02:22:31.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.050595 systemd[1]: ignition-disks.service: Deactivated successfully. 
Dec 13 02:22:31.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.050679 systemd[1]: Stopped ignition-disks.service. Dec 13 02:22:31.051620 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:22:31.051671 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:22:31.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.054175 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:22:31.055155 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:22:31.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.056060 systemd[1]: Stopped target network.target. Dec 13 02:22:31.057683 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:22:31.057741 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:22:31.059642 systemd[1]: Stopped target paths.target. Dec 13 02:22:31.059819 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:22:31.061903 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:22:31.062861 systemd[1]: Stopped target slices.target. Dec 13 02:22:31.064660 systemd[1]: Stopped target sockets.target. Dec 13 02:22:31.065554 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:22:31.065596 systemd[1]: Closed iscsid.socket. 
Dec 13 02:22:31.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.069313 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:22:31.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.070190 systemd[1]: Closed iscsiuio.socket. Dec 13 02:22:31.071642 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:22:31.072420 systemd[1]: Stopped ignition-setup.service. Dec 13 02:22:31.074628 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:22:31.074666 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:22:31.077189 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:22:31.080358 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:22:31.082966 systemd-networkd[1105]: eth0: DHCPv6 lease lost Dec 13 02:22:31.085453 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:22:31.086307 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:22:31.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.089139 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:22:31.089000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:22:31.089173 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:22:31.092725 systemd[1]: Stopping network-cleanup.service... Dec 13 02:22:31.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:31.093498 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:22:31.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.093550 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:22:31.095440 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:22:31.095483 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:22:31.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.097436 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:22:31.099102 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:22:31.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.101893 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:22:31.113773 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:22:31.122765 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:22:31.138000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:22:31.123253 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:22:31.138927 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:22:31.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.139164 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:22:31.142933 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 02:22:31.142991 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:22:31.145392 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:22:31.145441 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:22:31.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.147028 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:22:31.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.147096 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:22:31.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.149381 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:22:31.149442 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:22:31.150962 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:22:31.151021 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:22:31.153925 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:22:31.164435 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:22:31.164791 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 02:22:31.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.168702 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Dec 13 02:22:31.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.168773 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:22:31.169768 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:22:31.169823 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:22:31.173263 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 02:22:31.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:31.175153 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:22:31.175303 systemd[1]: Stopped network-cleanup.service. Dec 13 02:22:31.177478 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:22:31.177589 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:22:31.180104 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:22:31.183069 systemd[1]: Starting initrd-switch-root.service... 
Dec 13 02:22:31.202508 systemd[1]: Switching root. Dec 13 02:22:31.206000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:22:31.206000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:22:31.206000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:22:31.207000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:22:31.207000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:22:31.222663 systemd-journald[185]: Journal stopped Dec 13 02:22:36.752457 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 02:22:36.752547 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:22:36.752569 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 02:22:36.752589 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:22:36.752801 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:22:36.752826 kernel: SELinux: policy capability open_perms=1 Dec 13 02:22:36.752845 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:22:36.752863 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:22:36.752881 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:22:36.752901 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:22:36.752926 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:22:36.752945 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:22:36.752972 systemd[1]: Successfully loaded SELinux policy in 53.183ms. Dec 13 02:22:36.753010 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.894ms. Dec 13 02:22:36.753034 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:22:36.753056 systemd[1]: Detected virtualization amazon. 
Dec 13 02:22:36.753075 systemd[1]: Detected architecture x86-64.
Dec 13 02:22:36.753092 systemd[1]: Detected first boot.
Dec 13 02:22:36.753109 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:22:36.753129 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:22:36.753150 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:22:36.753174 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:22:36.753199 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:22:36.753221 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:22:36.753240 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:22:36.753259 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:22:36.753279 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:22:36.753298 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 02:22:36.753317 systemd[1]: Created slice system-getty.slice.
Dec 13 02:22:36.753566 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:22:36.753749 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:22:36.753778 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:22:36.753797 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:22:36.753823 systemd[1]: Created slice user.slice.
Dec 13 02:22:36.753854 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:22:36.753873 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:22:36.753891 systemd[1]: Set up automount boot.automount.
Dec 13 02:22:36.753910 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:22:36.753946 systemd[1]: Reached target integritysetup.target.
Dec 13 02:22:36.753964 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:22:36.753983 systemd[1]: Reached target remote-fs.target.
Dec 13 02:22:36.754001 systemd[1]: Reached target slices.target.
Dec 13 02:22:36.754021 systemd[1]: Reached target swap.target.
Dec 13 02:22:36.756327 systemd[1]: Reached target torcx.target.
Dec 13 02:22:36.756367 systemd[1]: Reached target veritysetup.target.
Dec 13 02:22:36.756385 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:22:36.756411 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:22:36.756430 kernel: kauditd_printk_skb: 56 callbacks suppressed
Dec 13 02:22:36.756452 kernel: audit: type=1400 audit(1734056556.467:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:22:36.756476 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:22:36.756494 kernel: audit: type=1335 audit(1734056556.467:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 02:22:36.756515 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:22:36.756535 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:22:36.756558 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:22:36.756581 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:22:36.756626 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:22:36.756645 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:22:36.756663 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:22:36.756741 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:22:36.756763 systemd[1]: Mounting media.mount...
Dec 13 02:22:36.756785 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:22:36.756805 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:22:36.756826 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:22:36.756843 systemd[1]: Mounting tmp.mount...
Dec 13 02:22:36.756861 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:22:36.756879 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:22:36.756897 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:22:36.756916 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:22:36.756933 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:22:36.756952 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:22:36.756969 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:22:36.756990 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:22:36.757007 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:22:36.757027 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:22:36.757046 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 02:22:36.757065 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 02:22:36.757083 systemd[1]: Starting systemd-journald.service...
Dec 13 02:22:36.757101 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:22:36.757119 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:22:36.757137 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:22:36.757158 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:22:36.757177 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:22:36.757202 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:22:36.757220 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:22:36.757237 systemd[1]: Mounted media.mount.
Dec 13 02:22:36.757255 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:22:36.757273 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:22:36.757291 systemd[1]: Mounted tmp.mount.
Dec 13 02:22:36.757321 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:22:36.757343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:22:36.757362 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:22:36.757380 kernel: audit: type=1130 audit(1734056556.667:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.757399 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:22:36.757421 kernel: audit: type=1130 audit(1734056556.675:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.757439 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:22:36.757458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:22:36.757477 kernel: audit: type=1131 audit(1734056556.675:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.757494 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:22:36.757512 kernel: audit: type=1130 audit(1734056556.683:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.757529 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:22:36.757547 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:22:36.757571 kernel: audit: type=1131 audit(1734056556.683:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.757588 kernel: audit: type=1130 audit(1734056556.699:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.757621 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:22:36.757639 systemd[1]: Reached target network-pre.target.
Dec 13 02:22:36.757657 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:22:36.757676 kernel: audit: type=1131 audit(1734056556.699:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.757697 kernel: audit: type=1130 audit(1734056556.702:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.757715 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 02:22:36.757737 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:22:36.757756 systemd[1]: Starting systemd-random-seed.service...
Dec 13 02:22:36.757773 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:22:36.757791 systemd[1]: Finished systemd-random-seed.service.
Dec 13 02:22:36.757820 systemd-journald[1443]: Journal started
Dec 13 02:22:36.757903 systemd-journald[1443]: Runtime Journal (/run/log/journal/ec2d879a6b1ad4227fcf04d840ab8231) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:22:36.467000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 02:22:36.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.747000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:22:36.747000 audit[1443]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffce3a9def0 a2=4000 a3=7ffce3a9df8c items=0 ppid=1 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:22:36.747000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:22:36.771670 systemd[1]: Started systemd-journald.service.
Dec 13 02:22:36.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.761440 systemd[1]: Reached target first-boot-complete.target.
Dec 13 02:22:36.766993 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 02:22:36.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.778191 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:22:36.778436 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:22:36.782068 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:22:36.789933 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 02:22:36.800820 systemd-journald[1443]: Time spent on flushing to /var/log/journal/ec2d879a6b1ad4227fcf04d840ab8231 is 110.038ms for 1106 entries.
Dec 13 02:22:36.800820 systemd-journald[1443]: System Journal (/var/log/journal/ec2d879a6b1ad4227fcf04d840ab8231) is 8.0M, max 195.6M, 187.6M free.
Dec 13 02:22:36.917732 systemd-journald[1443]: Received client request to flush runtime journal.
Dec 13 02:22:36.917797 kernel: loop: module loaded
Dec 13 02:22:36.917826 kernel: fuse: init (API version 7.34)
Dec 13 02:22:36.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.827303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:22:36.827534 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:22:36.828664 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:22:36.829988 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:22:36.846852 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:22:36.847208 systemd[1]: Finished modprobe@fuse.service.
Dec 13 02:22:36.872986 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 02:22:36.880306 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 02:22:36.919061 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 02:22:36.936031 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:22:36.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.937623 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:22:36.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:36.940356 systemd[1]: Starting systemd-sysusers.service...
Dec 13 02:22:36.943003 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 02:22:36.972201 udevadm[1498]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 02:22:37.014948 systemd[1]: Finished systemd-sysusers.service.
Dec 13 02:22:37.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:37.018065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:22:37.115915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:22:37.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:37.875442 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 02:22:37.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:37.879441 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:22:37.918756 systemd-udevd[1505]: Using default interface naming scheme 'v252'.
Dec 13 02:22:37.971994 systemd[1]: Started systemd-udevd.service.
Dec 13 02:22:37.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:37.974759 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:22:38.013256 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:22:38.122378 systemd[1]: Started systemd-userdbd.service.
Dec 13 02:22:38.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:38.124009 (udev-worker)[1519]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:22:38.133662 systemd[1]: Found device dev-ttyS0.device.
Dec 13 02:22:38.232625 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 02:22:38.252007 kernel: ACPI: button: Power Button [PWRF]
Dec 13 02:22:38.252121 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 13 02:22:38.296682 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 02:22:38.331777 systemd-networkd[1511]: lo: Link UP
Dec 13 02:22:38.331788 systemd-networkd[1511]: lo: Gained carrier
Dec 13 02:22:38.332357 systemd-networkd[1511]: Enumeration completed
Dec 13 02:22:38.332519 systemd[1]: Started systemd-networkd.service.
Dec 13 02:22:38.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:22:38.335364 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 02:22:38.336750 systemd-networkd[1511]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:22:38.341247 systemd-networkd[1511]: eth0: Link UP
Dec 13 02:22:38.341625 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:22:38.341792 systemd-networkd[1511]: eth0: Gained carrier
Dec 13 02:22:38.351797 systemd-networkd[1511]: eth0: DHCPv4 address 172.31.24.110/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 02:22:38.355000 audit[1515]: AVC avc: denied { confidentiality } for pid=1515 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:22:38.355000 audit[1515]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d42a7bae40 a1=337fc a2=7ff337a53bc5 a3=5 items=110 ppid=1505 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:22:38.355000 audit: CWD cwd="/"
Dec 13 02:22:38.355000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=1 name=(null) inode=14050 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=2 name=(null) inode=14050 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=3 name=(null) inode=14051 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=4 name=(null) inode=14050 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=5 name=(null) inode=14052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=6 name=(null) inode=14050 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=7 name=(null) inode=14053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=8 name=(null) inode=14053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=9 name=(null) inode=14054 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=10 name=(null) inode=14053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=11 name=(null) inode=14055 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=12 name=(null) inode=14053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=13 name=(null) inode=14056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=14 name=(null) inode=14053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=15 name=(null) inode=14057 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=16 name=(null) inode=14053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=17 name=(null) inode=14058 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=18 name=(null) inode=14050 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=19 name=(null) inode=14059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=20 name=(null) inode=14059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=21 name=(null) inode=14060 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=22 name=(null) inode=14059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=23 name=(null) inode=14061 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=24 name=(null) inode=14059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=25 name=(null) inode=14062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=26 name=(null) inode=14059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=27 name=(null) inode=14063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=28 name=(null) inode=14059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=29 name=(null) inode=14064 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=30 name=(null) inode=14050 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=31 name=(null) inode=14065 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=32 name=(null) inode=14065 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=33 name=(null) inode=14066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=34 name=(null) inode=14065 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=35 name=(null) inode=14067 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=36 name=(null) inode=14065 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=37 name=(null) inode=14068 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=38 name=(null) inode=14065 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=39 name=(null) inode=14069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=40 name=(null) inode=14065 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=41 name=(null) inode=14070 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=42 name=(null) inode=14050 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=43 name=(null) inode=14071 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=44 name=(null) inode=14071 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=45 name=(null) inode=14072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=46 name=(null) inode=14071 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=47 name=(null) inode=14073 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=48 name=(null) inode=14071 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=49 name=(null) inode=14074 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=50 name=(null) inode=14071 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=51 name=(null) inode=14075 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=52 name=(null) inode=14071 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=53 name=(null) inode=14076 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=55 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=56 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=57 name=(null) inode=14078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=58 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=59 name=(null) inode=14079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=60 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=61 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=62 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=63 name=(null) inode=14081 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=64 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=65 name=(null) inode=14082 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=66 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=67 name=(null) inode=14083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:22:38.355000 audit: PATH item=68 name=(null) inode=14080 dev=00:0b
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=69 name=(null) inode=14084 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=70 name=(null) inode=14080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=71 name=(null) inode=14085 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=72 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=73 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=74 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=75 name=(null) inode=14087 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=76 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=77 name=(null) inode=14088 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=78 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=79 name=(null) inode=14089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=80 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=81 name=(null) inode=14090 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=82 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=83 name=(null) inode=14091 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=84 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=85 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=86 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=87 name=(null) inode=14093 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=88 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=89 name=(null) inode=14094 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=90 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=91 name=(null) inode=14095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=92 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=93 name=(null) inode=14096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=94 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=95 name=(null) inode=14097 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=96 name=(null) inode=14077 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=97 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=98 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=99 name=(null) inode=14099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=100 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=101 name=(null) inode=14100 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=102 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=103 name=(null) inode=14101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=104 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 
13 02:22:38.355000 audit: PATH item=105 name=(null) inode=14102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=106 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=107 name=(null) inode=14103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PATH item=109 name=(null) inode=14104 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:22:38.355000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:22:38.382619 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 02:22:38.400474 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 02:22:38.410632 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:22:38.426644 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1514) Dec 13 02:22:38.552093 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 02:22:38.612179 systemd[1]: Finished systemd-udev-settle.service. 
Dec 13 02:22:38.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:38.614844 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:22:38.666374 lvm[1620]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:22:38.696261 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:22:38.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:38.698129 systemd[1]: Reached target cryptsetup.target. Dec 13 02:22:38.701306 systemd[1]: Starting lvm2-activation.service... Dec 13 02:22:38.710086 lvm[1622]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:22:38.739211 systemd[1]: Finished lvm2-activation.service. Dec 13 02:22:38.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:38.740525 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:22:38.741581 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:22:38.741650 systemd[1]: Reached target local-fs.target. Dec 13 02:22:38.742850 systemd[1]: Reached target machines.target. Dec 13 02:22:38.750065 systemd[1]: Starting ldconfig.service... Dec 13 02:22:38.754374 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 02:22:38.754640 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:38.756628 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:22:38.761716 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:22:38.772216 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:22:38.778283 systemd[1]: Starting systemd-sysext.service... Dec 13 02:22:38.789439 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1625 (bootctl) Dec 13 02:22:38.791848 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:22:38.817188 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:22:38.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:38.832929 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:22:38.842867 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:22:38.843215 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:22:38.873374 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:22:39.015622 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:22:39.031954 systemd-fsck[1638]: fsck.fat 4.2 (2021-01-31) Dec 13 02:22:39.031954 systemd-fsck[1638]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 02:22:39.039358 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:22:39.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:22:39.052155 systemd[1]: Mounting boot.mount... Dec 13 02:22:39.091821 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:22:39.095534 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:22:39.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:39.107812 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:22:39.120007 systemd[1]: Mounted boot.mount. Dec 13 02:22:39.160358 (sd-sysext)[1650]: Using extensions 'kubernetes'. Dec 13 02:22:39.166771 (sd-sysext)[1650]: Merged extensions into '/usr'. Dec 13 02:22:39.200231 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:22:39.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:39.223046 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:39.225216 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:22:39.226668 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:22:39.231289 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:22:39.236026 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:22:39.239699 systemd[1]: Starting modprobe@loop.service... Dec 13 02:22:39.242506 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:39.242769 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 02:22:39.243176 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:39.252924 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:22:39.264811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:22:39.265349 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:22:39.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:39.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:39.269149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:22:39.269587 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:22:39.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:39.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:39.271662 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:22:39.271897 systemd[1]: Finished modprobe@loop.service. Dec 13 02:22:39.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:39.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:39.273645 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:22:39.273867 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:22:39.277311 systemd[1]: Finished systemd-sysext.service. Dec 13 02:22:39.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:39.284233 systemd[1]: Starting ensure-sysext.service... Dec 13 02:22:39.287393 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:22:39.302430 systemd[1]: Reloading. Dec 13 02:22:39.316237 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:22:39.319801 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:22:39.325278 systemd-tmpfiles[1675]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 02:22:39.683130 /usr/lib/systemd/system-generators/torcx-generator[1694]: time="2024-12-13T02:22:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:22:39.683167 /usr/lib/systemd/system-generators/torcx-generator[1694]: time="2024-12-13T02:22:39Z" level=info msg="torcx already run" Dec 13 02:22:39.897401 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:22:39.897425 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:22:39.925061 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:22:39.970347 ldconfig[1624]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:22:40.032959 systemd[1]: Finished ldconfig.service. Dec 13 02:22:40.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.036368 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:22:40.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.045008 systemd[1]: Starting audit-rules.service... 
Dec 13 02:22:40.048044 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:22:40.051335 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:22:40.058215 systemd[1]: Starting systemd-resolved.service... Dec 13 02:22:40.062165 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:22:40.065340 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:22:40.079567 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:22:40.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.082000 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:22:40.086041 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 02:22:40.086459 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.091152 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:22:40.094233 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:22:40.101591 systemd[1]: Starting modprobe@loop.service... Dec 13 02:22:40.103091 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.103319 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:40.103509 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:22:40.103670 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:40.105131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:22:40.105376 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:22:40.106986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:22:40.107212 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:22:40.117431 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:22:40.123418 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:22:40.125079 systemd[1]: Finished modprobe@loop.service. Dec 13 02:22:40.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:40.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.128177 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:40.128994 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.132876 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:22:40.140999 systemd-networkd[1511]: eth0: Gained IPv6LL Dec 13 02:22:40.145000 audit[1762]: SYSTEM_BOOT pid=1762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.153348 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:22:40.155057 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.155747 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:40.156238 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:22:40.156806 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:40.159421 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:22:40.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:40.166754 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:40.167484 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.172184 systemd[1]: Starting modprobe@drm.service... Dec 13 02:22:40.175260 systemd[1]: Starting modprobe@loop.service... Dec 13 02:22:40.176484 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.176831 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:40.177047 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:22:40.177330 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:22:40.188257 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:22:40.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.190242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:22:40.190468 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:22:40.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:40.192724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:22:40.193095 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:22:40.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.196320 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:22:40.201414 systemd[1]: Finished ensure-sysext.service. Dec 13 02:22:40.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.205093 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:22:40.205315 systemd[1]: Finished modprobe@drm.service. Dec 13 02:22:40.216269 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:22:40.216651 systemd[1]: Finished modprobe@loop.service. 
Dec 13 02:22:40.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.218769 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.264758 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:22:40.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:22:40.268010 systemd[1]: Starting systemd-update-done.service... Dec 13 02:22:40.287329 systemd[1]: Finished systemd-update-done.service. Dec 13 02:22:40.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:22:40.316000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:22:40.316000 audit[1802]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa81786b0 a2=420 a3=0 items=0 ppid=1756 pid=1802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:22:40.316000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:22:40.316474 augenrules[1802]: No rules Dec 13 02:22:40.319339 systemd[1]: Finished audit-rules.service. Dec 13 02:22:40.465581 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:22:40.467027 systemd[1]: Reached target time-set.target. Dec 13 02:22:40.469182 systemd-resolved[1760]: Positive Trust Anchors: Dec 13 02:22:40.469696 systemd-resolved[1760]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:22:40.469824 systemd-resolved[1760]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:22:40.515958 systemd-resolved[1760]: Defaulting to hostname 'linux'. Dec 13 02:22:40.522145 systemd[1]: Started systemd-resolved.service. Dec 13 02:22:40.526545 systemd[1]: Reached target network.target. Dec 13 02:22:40.527668 systemd[1]: Reached target network-online.target. Dec 13 02:22:40.529486 systemd[1]: Reached target nss-lookup.target. 
Dec 13 02:22:40.532008 systemd[1]: Reached target sysinit.target. Dec 13 02:22:40.533916 systemd[1]: Started motdgen.path. Dec 13 02:22:40.536284 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:22:40.542144 systemd[1]: Started logrotate.timer. Dec 13 02:22:40.543875 systemd[1]: Started mdadm.timer. Dec 13 02:22:40.546043 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:22:40.547199 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:22:40.547242 systemd[1]: Reached target paths.target. Dec 13 02:22:40.548251 systemd[1]: Reached target timers.target. Dec 13 02:22:40.549911 systemd[1]: Listening on dbus.socket. Dec 13 02:22:40.552707 systemd[1]: Starting docker.socket... Dec 13 02:22:40.561074 systemd[1]: Listening on sshd.socket. Dec 13 02:22:40.563269 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:40.564347 systemd[1]: Listening on docker.socket. Dec 13 02:22:40.565734 systemd[1]: Reached target sockets.target. Dec 13 02:22:40.568856 systemd[1]: Reached target basic.target. Dec 13 02:22:40.571936 systemd[1]: System is tainted: cgroupsv1 Dec 13 02:22:40.572016 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.572048 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:22:40.576578 systemd[1]: Started amazon-ssm-agent.service. Dec 13 02:22:40.582657 systemd[1]: Starting containerd.service... Dec 13 02:22:40.600047 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:22:40.610374 systemd[1]: Starting dbus.service... Dec 13 02:22:40.614219 systemd[1]: Starting enable-oem-cloudinit.service... 
Dec 13 02:22:40.617582 systemd[1]: Starting extend-filesystems.service... Dec 13 02:22:40.625644 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:22:40.630446 systemd[1]: Starting kubelet.service... Dec 13 02:22:40.633765 systemd[1]: Starting motdgen.service... Dec 13 02:22:40.676269 jq[1816]: false Dec 13 02:22:40.654119 systemd[1]: Started nvidia.service. Dec 13 02:22:40.657716 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:22:40.660831 systemd[1]: Starting sshd-keygen.service... Dec 13 02:22:40.665877 systemd[1]: Starting systemd-logind.service... Dec 13 02:22:40.667272 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:22:40.667344 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:22:40.669195 systemd[1]: Starting update-engine.service... Dec 13 02:22:40.672003 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:22:40.676930 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:22:40.677285 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:22:41.573669 systemd-timesyncd[1761]: Contacted time server 12.203.31.102:123 (0.flatcar.pool.ntp.org). Dec 13 02:22:41.573855 systemd-timesyncd[1761]: Initial clock synchronization to Fri 2024-12-13 02:22:41.573239 UTC. Dec 13 02:22:41.573941 systemd-resolved[1760]: Clock change detected. Flushing caches. Dec 13 02:22:41.605795 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:22:41.606139 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 02:22:41.616675 jq[1830]: true Dec 13 02:22:41.665917 jq[1840]: true Dec 13 02:22:41.831083 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:22:41.831448 systemd[1]: Finished motdgen.service. Dec 13 02:22:41.838422 extend-filesystems[1817]: Found loop1 Dec 13 02:22:41.843563 extend-filesystems[1817]: Found nvme0n1 Dec 13 02:22:41.843563 extend-filesystems[1817]: Found nvme0n1p1 Dec 13 02:22:41.843563 extend-filesystems[1817]: Found nvme0n1p2 Dec 13 02:22:41.843563 extend-filesystems[1817]: Found nvme0n1p3 Dec 13 02:22:41.843563 extend-filesystems[1817]: Found usr Dec 13 02:22:41.843563 extend-filesystems[1817]: Found nvme0n1p4 Dec 13 02:22:41.843563 extend-filesystems[1817]: Found nvme0n1p6 Dec 13 02:22:41.843563 extend-filesystems[1817]: Found nvme0n1p7 Dec 13 02:22:41.843563 extend-filesystems[1817]: Found nvme0n1p9 Dec 13 02:22:41.843563 extend-filesystems[1817]: Checking size of /dev/nvme0n1p9 Dec 13 02:22:41.869203 dbus-daemon[1814]: [system] SELinux support is enabled Dec 13 02:22:41.876943 systemd[1]: Started dbus.service. Dec 13 02:22:41.882681 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:22:41.882723 systemd[1]: Reached target system-config.target. Dec 13 02:22:41.884144 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:22:41.884171 systemd[1]: Reached target user-config.target. Dec 13 02:22:41.915082 dbus-daemon[1814]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1511 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:22:41.921120 systemd[1]: Starting systemd-hostnamed.service... 
Dec 13 02:22:41.939331 amazon-ssm-agent[1811]: 2024/12/13 02:22:41 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 02:22:41.943375 amazon-ssm-agent[1811]: Initializing new seelog logger Dec 13 02:22:41.944483 extend-filesystems[1817]: Resized partition /dev/nvme0n1p9 Dec 13 02:22:41.954698 amazon-ssm-agent[1811]: New Seelog Logger Creation Complete Dec 13 02:22:41.954947 amazon-ssm-agent[1811]: 2024/12/13 02:22:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:22:41.955059 amazon-ssm-agent[1811]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:22:41.955727 amazon-ssm-agent[1811]: 2024/12/13 02:22:41 processing appconfig overrides Dec 13 02:22:41.956064 update_engine[1829]: I1213 02:22:41.955178 1829 main.cc:92] Flatcar Update Engine starting Dec 13 02:22:41.957160 extend-filesystems[1884]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:22:41.972077 systemd[1]: Started update-engine.service. Dec 13 02:22:41.973580 update_engine[1829]: I1213 02:22:41.972395 1829 update_check_scheduler.cc:74] Next update check in 4m45s Dec 13 02:22:41.976261 systemd[1]: Started locksmithd.service. Dec 13 02:22:42.006899 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 02:22:42.129928 env[1835]: time="2024-12-13T02:22:42.129848405Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:22:42.139877 bash[1887]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:22:42.140489 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 02:22:42.141215 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Dec 13 02:22:42.162613 extend-filesystems[1884]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 02:22:42.162613 extend-filesystems[1884]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 02:22:42.162613 extend-filesystems[1884]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 02:22:42.180768 extend-filesystems[1817]: Resized filesystem in /dev/nvme0n1p9 Dec 13 02:22:42.162978 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:22:42.163286 systemd[1]: Finished extend-filesystems.service. Dec 13 02:22:42.254972 systemd-logind[1826]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:22:42.255001 systemd-logind[1826]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:22:42.256681 systemd-logind[1826]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:22:42.257481 systemd-logind[1826]: New seat seat0. Dec 13 02:22:42.259688 systemd[1]: Started systemd-logind.service. Dec 13 02:22:42.335150 env[1835]: time="2024-12-13T02:22:42.335067760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:22:42.338851 env[1835]: time="2024-12-13T02:22:42.338805950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:42.357789 env[1835]: time="2024-12-13T02:22:42.357734225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:22:42.357952 env[1835]: time="2024-12-13T02:22:42.357933165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:22:42.361108 env[1835]: time="2024-12-13T02:22:42.361061285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:22:42.361285 env[1835]: time="2024-12-13T02:22:42.361266474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:42.361400 env[1835]: time="2024-12-13T02:22:42.361379655Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:22:42.361473 env[1835]: time="2024-12-13T02:22:42.361459530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:42.361662 env[1835]: time="2024-12-13T02:22:42.361643878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:42.363500 env[1835]: time="2024-12-13T02:22:42.363472687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:22:42.370779 env[1835]: time="2024-12-13T02:22:42.370713632Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:22:42.371163 env[1835]: time="2024-12-13T02:22:42.371083140Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 02:22:42.371553 env[1835]: time="2024-12-13T02:22:42.371521513Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:22:42.371659 env[1835]: time="2024-12-13T02:22:42.371643473Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:22:42.382183 env[1835]: time="2024-12-13T02:22:42.381898068Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:22:42.382795 env[1835]: time="2024-12-13T02:22:42.382753659Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:22:42.382930 env[1835]: time="2024-12-13T02:22:42.382911445Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:22:42.383531 env[1835]: time="2024-12-13T02:22:42.383508390Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:22:42.383733 env[1835]: time="2024-12-13T02:22:42.383713061Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:22:42.384068 env[1835]: time="2024-12-13T02:22:42.384045418Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:22:42.384237 env[1835]: time="2024-12-13T02:22:42.384219408Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:22:42.384338 env[1835]: time="2024-12-13T02:22:42.384322913Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:22:42.384453 env[1835]: time="2024-12-13T02:22:42.384436941Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Dec 13 02:22:42.384794 env[1835]: time="2024-12-13T02:22:42.384531667Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:22:42.384915 env[1835]: time="2024-12-13T02:22:42.384897632Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:22:42.385011 env[1835]: time="2024-12-13T02:22:42.384995521Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:22:42.385559 env[1835]: time="2024-12-13T02:22:42.385539111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:22:42.386301 env[1835]: time="2024-12-13T02:22:42.386183202Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:22:42.388222 env[1835]: time="2024-12-13T02:22:42.388195957Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:22:42.391139 env[1835]: time="2024-12-13T02:22:42.391092583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.391665 env[1835]: time="2024-12-13T02:22:42.391558511Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:22:42.392538 env[1835]: time="2024-12-13T02:22:42.392513076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.393465 env[1835]: time="2024-12-13T02:22:42.393395329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.393919 env[1835]: time="2024-12-13T02:22:42.393896175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 02:22:42.395730 env[1835]: time="2024-12-13T02:22:42.395703782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.395856 env[1835]: time="2024-12-13T02:22:42.395838717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.395957 env[1835]: time="2024-12-13T02:22:42.395941072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.396052 env[1835]: time="2024-12-13T02:22:42.396037163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.396203 env[1835]: time="2024-12-13T02:22:42.396171951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.398113 env[1835]: time="2024-12-13T02:22:42.398086630Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:22:42.401018 env[1835]: time="2024-12-13T02:22:42.400983798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.406712 env[1835]: time="2024-12-13T02:22:42.406586423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.407448 env[1835]: time="2024-12-13T02:22:42.407417661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.408007 env[1835]: time="2024-12-13T02:22:42.407692857Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:22:42.408154 env[1835]: time="2024-12-13T02:22:42.408118417Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:22:42.408872 env[1835]: time="2024-12-13T02:22:42.408849814Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:22:42.409009 env[1835]: time="2024-12-13T02:22:42.408991754Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:22:42.410076 env[1835]: time="2024-12-13T02:22:42.410039462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:22:42.410893 env[1835]: time="2024-12-13T02:22:42.410813581Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:22:42.417664 env[1835]: time="2024-12-13T02:22:42.412835400Z" level=info msg="Connect containerd service" Dec 13 02:22:42.417664 env[1835]: time="2024-12-13T02:22:42.412915607Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:22:42.439729 env[1835]: time="2024-12-13T02:22:42.439620697Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:22:42.440029 env[1835]: time="2024-12-13T02:22:42.440001674Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:22:42.440107 env[1835]: time="2024-12-13T02:22:42.440067725Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:22:42.440270 systemd[1]: Started containerd.service. 
Dec 13 02:22:42.476556 env[1835]: time="2024-12-13T02:22:42.476512584Z" level=info msg="containerd successfully booted in 0.377617s" Dec 13 02:22:42.477263 env[1835]: time="2024-12-13T02:22:42.476854193Z" level=info msg="Start subscribing containerd event" Dec 13 02:22:42.477572 env[1835]: time="2024-12-13T02:22:42.477487382Z" level=info msg="Start recovering state" Dec 13 02:22:42.477740 env[1835]: time="2024-12-13T02:22:42.477724961Z" level=info msg="Start event monitor" Dec 13 02:22:42.477825 env[1835]: time="2024-12-13T02:22:42.477812972Z" level=info msg="Start snapshots syncer" Dec 13 02:22:42.477891 env[1835]: time="2024-12-13T02:22:42.477879793Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:22:42.477970 env[1835]: time="2024-12-13T02:22:42.477958581Z" level=info msg="Start streaming server" Dec 13 02:22:42.486024 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 02:22:42.577818 dbus-daemon[1814]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:22:42.578008 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:22:42.578701 dbus-daemon[1814]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1880 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:22:42.584075 systemd[1]: Starting polkit.service... Dec 13 02:22:42.611074 polkitd[1932]: Started polkitd version 121 Dec 13 02:22:42.633955 polkitd[1932]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:22:42.634112 polkitd[1932]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:22:42.637425 polkitd[1932]: Finished loading, compiling and executing 2 rules Dec 13 02:22:42.638041 dbus-daemon[1814]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:22:42.638225 systemd[1]: Started polkit.service. 
Dec 13 02:22:42.638896 polkitd[1932]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:22:42.664401 systemd-hostnamed[1880]: Hostname set to (transient) Dec 13 02:22:42.664401 systemd-resolved[1760]: System hostname changed to 'ip-172-31-24-110'. Dec 13 02:22:42.930423 coreos-metadata[1813]: Dec 13 02:22:42.930 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 02:22:42.933496 coreos-metadata[1813]: Dec 13 02:22:42.933 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 02:22:42.934222 coreos-metadata[1813]: Dec 13 02:22:42.934 INFO Fetch successful Dec 13 02:22:42.934222 coreos-metadata[1813]: Dec 13 02:22:42.934 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 02:22:42.935149 coreos-metadata[1813]: Dec 13 02:22:42.934 INFO Fetch successful Dec 13 02:22:42.940226 unknown[1813]: wrote ssh authorized keys file for user: core Dec 13 02:22:42.971145 update-ssh-keys[2013]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:22:42.971686 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Create new startup processor Dec 13 02:22:42.971826 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Dec 13 02:22:42.990982 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 02:22:42.991161 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Initializing bookkeeping folders Dec 13 02:22:42.991250 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO removing the completed state files Dec 13 02:22:42.991336 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Initializing bookkeeping folders for long running plugins Dec 13 02:22:42.992762 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 02:22:42.992909 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Initializing healthcheck folders for long running plugins Dec 13 02:22:42.993002 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Initializing locations for inventory plugin Dec 13 02:22:42.993100 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Initializing default location for custom inventory Dec 13 02:22:42.993203 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Initializing default location for file inventory Dec 13 02:22:42.993290 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Initializing default location for role inventory Dec 13 02:22:42.993396 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Init the cloudwatchlogs publisher Dec 13 02:22:42.995511 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent plugin aws:runDocument Dec 13 02:22:42.995649 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 02:22:42.995741 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent plugin aws:configureDocker Dec 13 02:22:42.995824 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent 
plugin aws:downloadContent Dec 13 02:22:42.995928 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 02:22:42.996014 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent plugin aws:configurePackage Dec 13 02:22:42.996099 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 02:22:42.996183 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 02:22:42.996268 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 02:22:42.996366 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 02:22:42.996462 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 02:22:42.996544 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO OS: linux, Arch: amd64 Dec 13 02:22:42.998178 amazon-ssm-agent[1811]: datastore file /var/lib/amazon/ssm/i-01a08557cd7832df9/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 02:22:43.074225 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessageGatewayService] Starting session document processing engine... 
Dec 13 02:22:43.168908 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessageGatewayService] [EngineProcessor] Starting
Dec 13 02:22:43.233020 locksmithd[1888]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 02:22:43.263181 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Dec 13 02:22:43.357707 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-01a08557cd7832df9, requestId: dbdea333-984c-48fb-852a-3f400521cd99
Dec 13 02:22:43.453034 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] Starting document processing engine...
Dec 13 02:22:43.547995 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Dec 13 02:22:43.554929 sshd_keygen[1860]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 02:22:43.581957 systemd[1]: Finished sshd-keygen.service.
Dec 13 02:22:43.585969 systemd[1]: Starting issuegen.service...
Dec 13 02:22:43.595230 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 02:22:43.595635 systemd[1]: Finished issuegen.service.
Dec 13 02:22:43.599751 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 02:22:43.610543 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 02:22:43.613893 systemd[1]: Started getty@tty1.service.
Dec 13 02:22:43.617234 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 02:22:43.618546 systemd[1]: Reached target getty.target.
Dec 13 02:22:43.642980 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Dec 13 02:22:43.738273 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] Starting message polling
Dec 13 02:22:43.834126 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] Starting send replies to MDS
Dec 13 02:22:43.930001 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [instanceID=i-01a08557cd7832df9] Starting association polling
Dec 13 02:22:44.025955 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Dec 13 02:22:44.122206 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] [Association] Launching response handler
Dec 13 02:22:44.218454 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Dec 13 02:22:44.314961 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Dec 13 02:22:44.411676 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Dec 13 02:22:44.508547 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [MessageGatewayService] listening reply.
Dec 13 02:22:44.606994 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 02:22:44.665707 systemd[1]: Started kubelet.service.
Dec 13 02:22:44.668116 systemd[1]: Reached target multi-user.target.
Dec 13 02:22:44.673067 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 02:22:44.686329 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 02:22:44.686674 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 02:22:44.689427 systemd[1]: Startup finished in 7.506s (kernel) + 12.233s (userspace) = 19.740s.
Dec 13 02:22:44.704235 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [OfflineService] Starting document processing engine...
Dec 13 02:22:44.802646 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 02:22:44.900537 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 02:22:44.998560 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [OfflineService] Starting message polling
Dec 13 02:22:45.096540 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [OfflineService] Starting send replies to MDS
Dec 13 02:22:45.194845 amazon-ssm-agent[1811]: 2024-12-13 02:22:42 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 02:22:45.293366 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 02:22:45.392069 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 02:22:45.490834 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 02:22:45.590133 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 02:22:45.689534 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 02:22:45.789484 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 02:22:45.889315 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01a08557cd7832df9?role=subscribe&stream=input
Dec 13 02:22:45.989778 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01a08557cd7832df9?role=subscribe&stream=input
Dec 13 02:22:46.089833 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 02:22:46.190172 amazon-ssm-agent[1811]: 2024-12-13 02:22:43 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 02:22:46.358912 kubelet[2055]: E1213 02:22:46.358828 2055 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:22:46.361181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:22:46.361413 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:22:50.206536 systemd[1]: Created slice system-sshd.slice.
Dec 13 02:22:50.209228 systemd[1]: Started sshd@0-172.31.24.110:22-139.178.68.195:36414.service.
Dec 13 02:22:50.425097 sshd[2064]: Accepted publickey for core from 139.178.68.195 port 36414 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:22:50.428250 sshd[2064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:22:50.447122 systemd[1]: Created slice user-500.slice.
Dec 13 02:22:50.448743 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 02:22:50.456426 systemd-logind[1826]: New session 1 of user core.
Dec 13 02:22:50.468035 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 02:22:50.470745 systemd[1]: Starting user@500.service...
Dec 13 02:22:50.478023 (systemd)[2069]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:22:50.633234 systemd[2069]: Queued start job for default target default.target.
Dec 13 02:22:50.633579 systemd[2069]: Reached target paths.target.
Dec 13 02:22:50.633604 systemd[2069]: Reached target sockets.target.
Dec 13 02:22:50.633623 systemd[2069]: Reached target timers.target.
Dec 13 02:22:50.633641 systemd[2069]: Reached target basic.target.
Dec 13 02:22:50.633804 systemd[1]: Started user@500.service.
Dec 13 02:22:50.635125 systemd[1]: Started session-1.scope.
Dec 13 02:22:50.635703 systemd[2069]: Reached target default.target.
Dec 13 02:22:50.635971 systemd[2069]: Startup finished in 114ms.
Dec 13 02:22:50.781098 systemd[1]: Started sshd@1-172.31.24.110:22-139.178.68.195:36420.service.
Dec 13 02:22:50.943027 sshd[2078]: Accepted publickey for core from 139.178.68.195 port 36420 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:22:50.944803 sshd[2078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:22:50.954447 systemd-logind[1826]: New session 2 of user core.
Dec 13 02:22:50.955188 systemd[1]: Started session-2.scope.
Dec 13 02:22:51.082857 sshd[2078]: pam_unix(sshd:session): session closed for user core
Dec 13 02:22:51.087089 systemd[1]: sshd@1-172.31.24.110:22-139.178.68.195:36420.service: Deactivated successfully.
Dec 13 02:22:51.091261 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 02:22:51.094336 systemd-logind[1826]: Session 2 logged out. Waiting for processes to exit.
Dec 13 02:22:51.096491 systemd-logind[1826]: Removed session 2.
Dec 13 02:22:51.111429 systemd[1]: Started sshd@2-172.31.24.110:22-139.178.68.195:36426.service.
Dec 13 02:22:51.278091 sshd[2085]: Accepted publickey for core from 139.178.68.195 port 36426 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:22:51.279621 sshd[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:22:51.288131 systemd-logind[1826]: New session 3 of user core.
Dec 13 02:22:51.288284 systemd[1]: Started session-3.scope.
Dec 13 02:22:51.408133 sshd[2085]: pam_unix(sshd:session): session closed for user core
Dec 13 02:22:51.412434 systemd[1]: sshd@2-172.31.24.110:22-139.178.68.195:36426.service: Deactivated successfully.
Dec 13 02:22:51.413824 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 02:22:51.414670 systemd-logind[1826]: Session 3 logged out. Waiting for processes to exit.
Dec 13 02:22:51.417456 systemd-logind[1826]: Removed session 3.
Dec 13 02:22:51.435486 systemd[1]: Started sshd@3-172.31.24.110:22-139.178.68.195:36428.service.
Dec 13 02:22:51.619177 sshd[2092]: Accepted publickey for core from 139.178.68.195 port 36428 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:22:51.620902 sshd[2092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:22:51.632395 systemd[1]: Started session-4.scope.
Dec 13 02:22:51.632903 systemd-logind[1826]: New session 4 of user core.
Dec 13 02:22:51.797160 sshd[2092]: pam_unix(sshd:session): session closed for user core
Dec 13 02:22:51.808780 systemd[1]: sshd@3-172.31.24.110:22-139.178.68.195:36428.service: Deactivated successfully.
Dec 13 02:22:51.810280 systemd-logind[1826]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:22:51.810394 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:22:51.825676 systemd-logind[1826]: Removed session 4.
Dec 13 02:22:51.837053 systemd[1]: Started sshd@4-172.31.24.110:22-139.178.68.195:36434.service.
Dec 13 02:22:52.003504 sshd[2099]: Accepted publickey for core from 139.178.68.195 port 36434 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:22:52.005180 sshd[2099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:22:52.016452 systemd-logind[1826]: New session 5 of user core.
Dec 13 02:22:52.017504 systemd[1]: Started session-5.scope.
Dec 13 02:22:52.147669 sudo[2103]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:22:52.150482 sudo[2103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 02:22:52.186068 systemd[1]: Starting coreos-metadata.service...
Dec 13 02:22:52.322722 coreos-metadata[2107]: Dec 13 02:22:52.322 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 02:22:52.324041 coreos-metadata[2107]: Dec 13 02:22:52.324 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Dec 13 02:22:52.324854 coreos-metadata[2107]: Dec 13 02:22:52.324 INFO Fetch successful
Dec 13 02:22:52.324854 coreos-metadata[2107]: Dec 13 02:22:52.324 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Dec 13 02:22:52.325451 coreos-metadata[2107]: Dec 13 02:22:52.325 INFO Fetch successful
Dec 13 02:22:52.325587 coreos-metadata[2107]: Dec 13 02:22:52.325 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Dec 13 02:22:52.326138 coreos-metadata[2107]: Dec 13 02:22:52.326 INFO Fetch successful
Dec 13 02:22:52.326282 coreos-metadata[2107]: Dec 13 02:22:52.326 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Dec 13 02:22:52.326927 coreos-metadata[2107]: Dec 13 02:22:52.326 INFO Fetch successful
Dec 13 02:22:52.327021 coreos-metadata[2107]: Dec 13 02:22:52.326 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Dec 13 02:22:52.327503 coreos-metadata[2107]: Dec 13 02:22:52.327 INFO Fetch successful
Dec 13 02:22:52.327572 coreos-metadata[2107]: Dec 13 02:22:52.327 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Dec 13 02:22:52.328144 coreos-metadata[2107]: Dec 13 02:22:52.328 INFO Fetch successful
Dec 13 02:22:52.328202 coreos-metadata[2107]: Dec 13 02:22:52.328 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Dec 13 02:22:52.328736 coreos-metadata[2107]: Dec 13 02:22:52.328 INFO Fetch successful
Dec 13 02:22:52.328804 coreos-metadata[2107]: Dec 13 02:22:52.328 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Dec 13 02:22:52.329444 coreos-metadata[2107]: Dec 13 02:22:52.329 INFO Fetch successful
Dec 13 02:22:52.343105 systemd[1]: Finished coreos-metadata.service.
Dec 13 02:22:53.739411 systemd[1]: Stopped kubelet.service.
Dec 13 02:22:53.748522 systemd[1]: Starting kubelet.service...
Dec 13 02:22:53.817191 systemd[1]: Reloading.
Dec 13 02:22:53.966196 /usr/lib/systemd/system-generators/torcx-generator[2170]: time="2024-12-13T02:22:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:22:53.966238 /usr/lib/systemd/system-generators/torcx-generator[2170]: time="2024-12-13T02:22:53Z" level=info msg="torcx already run"
Dec 13 02:22:54.117322 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:22:54.117363 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:22:54.141097 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:22:54.306209 systemd[1]: Started kubelet.service.
Dec 13 02:22:54.314472 systemd[1]: Stopping kubelet.service...
Dec 13 02:22:54.316976 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 02:22:54.317670 systemd[1]: Stopped kubelet.service.
Dec 13 02:22:54.323120 systemd[1]: Starting kubelet.service...
Dec 13 02:22:54.631578 systemd[1]: Started kubelet.service.
Dec 13 02:22:54.777521 kubelet[2243]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:22:54.777998 kubelet[2243]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:22:54.777998 kubelet[2243]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:22:54.778116 kubelet[2243]: I1213 02:22:54.778071 2243 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:22:55.201059 kubelet[2243]: I1213 02:22:55.200269 2243 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 02:22:55.201231 kubelet[2243]: I1213 02:22:55.201070 2243 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:22:55.201814 kubelet[2243]: I1213 02:22:55.201377 2243 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 02:22:55.302877 kubelet[2243]: I1213 02:22:55.302824 2243 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:22:55.333042 kubelet[2243]: I1213 02:22:55.332997 2243 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 02:22:55.334054 kubelet[2243]: I1213 02:22:55.334027 2243 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:22:55.334267 kubelet[2243]: I1213 02:22:55.334244 2243 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 02:22:55.335122 kubelet[2243]: I1213 02:22:55.335084 2243 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:22:55.335122 kubelet[2243]: I1213 02:22:55.335120 2243 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 02:22:55.335286 kubelet[2243]: I1213 02:22:55.335266 2243 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:22:55.335425 kubelet[2243]: I1213 02:22:55.335411 2243 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 02:22:55.335516 kubelet[2243]: I1213 02:22:55.335504 2243 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:22:55.335560 kubelet[2243]: I1213 02:22:55.335544 2243 kubelet.go:312] "Adding apiserver pod source"
Dec 13 02:22:55.335599 kubelet[2243]: I1213 02:22:55.335565 2243 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:22:55.335971 kubelet[2243]: E1213 02:22:55.335940 2243 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:55.336056 kubelet[2243]: E1213 02:22:55.335994 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:55.340621 kubelet[2243]: I1213 02:22:55.340588 2243 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 02:22:55.362153 kubelet[2243]: I1213 02:22:55.362100 2243 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:22:55.372525 kubelet[2243]: W1213 02:22:55.372484 2243 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 02:22:55.375352 kubelet[2243]: I1213 02:22:55.375311 2243 server.go:1256] "Started kubelet"
Dec 13 02:22:55.377921 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 02:22:55.378732 kubelet[2243]: I1213 02:22:55.378092 2243 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:22:55.390358 kubelet[2243]: I1213 02:22:55.390123 2243 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:22:55.391408 kubelet[2243]: I1213 02:22:55.391386 2243 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 02:22:55.393773 kubelet[2243]: I1213 02:22:55.393748 2243 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:22:55.394144 kubelet[2243]: I1213 02:22:55.394129 2243 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:22:55.394897 kubelet[2243]: I1213 02:22:55.394880 2243 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 02:22:55.395583 kubelet[2243]: I1213 02:22:55.395564 2243 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 02:22:55.395760 kubelet[2243]: I1213 02:22:55.395749 2243 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 02:22:55.400109 kubelet[2243]: I1213 02:22:55.400084 2243 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:22:55.401046 kubelet[2243]: I1213 02:22:55.401016 2243 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:22:55.402067 kubelet[2243]: E1213 02:22:55.402050 2243 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 02:22:55.402904 kubelet[2243]: E1213 02:22:55.402889 2243 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.24.110\" not found" node="172.31.24.110"
Dec 13 02:22:55.403406 kubelet[2243]: I1213 02:22:55.403391 2243 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:22:55.435287 kubelet[2243]: I1213 02:22:55.435264 2243 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:22:55.435457 kubelet[2243]: I1213 02:22:55.435448 2243 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:22:55.435530 kubelet[2243]: I1213 02:22:55.435523 2243 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:22:55.438891 kubelet[2243]: I1213 02:22:55.438867 2243 policy_none.go:49] "None policy: Start"
Dec 13 02:22:55.439808 kubelet[2243]: I1213 02:22:55.439792 2243 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:22:55.439930 kubelet[2243]: I1213 02:22:55.439922 2243 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 02:22:55.447252 kubelet[2243]: I1213 02:22:55.447224 2243 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:22:55.447684 kubelet[2243]: I1213 02:22:55.447669 2243 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:22:55.470072 kubelet[2243]: E1213 02:22:55.469972 2243 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.24.110\" not found"
Dec 13 02:22:55.496857 kubelet[2243]: I1213 02:22:55.496561 2243 kubelet_node_status.go:73] "Attempting to register node" node="172.31.24.110"
Dec 13 02:22:55.502791 kubelet[2243]: I1213 02:22:55.502732 2243 kubelet_node_status.go:76] "Successfully registered node" node="172.31.24.110"
Dec 13 02:22:55.529882 kubelet[2243]: I1213 02:22:55.529859 2243 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 02:22:55.530585 env[1835]: time="2024-12-13T02:22:55.530544729Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 02:22:55.531044 kubelet[2243]: I1213 02:22:55.531029 2243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:22:55.531219 kubelet[2243]: I1213 02:22:55.531090 2243 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 02:22:55.532994 kubelet[2243]: I1213 02:22:55.532976 2243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:22:55.533579 kubelet[2243]: I1213 02:22:55.533564 2243 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:22:55.533773 kubelet[2243]: I1213 02:22:55.533760 2243 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 02:22:55.533900 kubelet[2243]: E1213 02:22:55.533890 2243 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 02:22:56.166112 sudo[2103]: pam_unix(sudo:session): session closed for user root
Dec 13 02:22:56.195282 sshd[2099]: pam_unix(sshd:session): session closed for user core
Dec 13 02:22:56.202912 systemd[1]: sshd@4-172.31.24.110:22-139.178.68.195:36434.service: Deactivated successfully.
Dec 13 02:22:56.208634 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:22:56.213415 systemd-logind[1826]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:22:56.215776 kubelet[2243]: I1213 02:22:56.214562 2243 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 02:22:56.217462 kubelet[2243]: W1213 02:22:56.215889 2243 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 02:22:56.217462 kubelet[2243]: W1213 02:22:56.215939 2243 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 02:22:56.220707 kubelet[2243]: W1213 02:22:56.217604 2243 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 02:22:56.223899 systemd-logind[1826]: Removed session 5.
Dec 13 02:22:56.336303 kubelet[2243]: E1213 02:22:56.336252 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:22:56.336303 kubelet[2243]: I1213 02:22:56.336255 2243 apiserver.go:52] "Watching apiserver"
Dec 13 02:22:56.344187 kubelet[2243]: I1213 02:22:56.344147 2243 topology_manager.go:215] "Topology Admit Handler" podUID="59be7261-86e7-4ba3-bf7b-c67b0eefe2df" podNamespace="kube-system" podName="kube-proxy-24rxq"
Dec 13 02:22:56.344397 kubelet[2243]: I1213 02:22:56.344264 2243 topology_manager.go:215] "Topology Admit Handler" podUID="a03534de-984c-4cee-9dea-c0718f3c32f6" podNamespace="kube-system" podName="cilium-nhgdf"
Dec 13 02:22:56.396426 kubelet[2243]: I1213 02:22:56.396381 2243 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 02:22:56.402377 kubelet[2243]: I1213 02:22:56.402330 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-hostproc\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.402531 kubelet[2243]: I1213 02:22:56.402466 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cni-path\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.402531 kubelet[2243]: I1213 02:22:56.402529 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-etc-cni-netd\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.402700 kubelet[2243]: I1213 02:22:56.402561 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a03534de-984c-4cee-9dea-c0718f3c32f6-hubble-tls\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.402772 kubelet[2243]: I1213 02:22:56.402704 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfsh\" (UniqueName: \"kubernetes.io/projected/a03534de-984c-4cee-9dea-c0718f3c32f6-kube-api-access-6kfsh\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.402817 kubelet[2243]: I1213 02:22:56.402794 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqpgp\" (UniqueName: \"kubernetes.io/projected/59be7261-86e7-4ba3-bf7b-c67b0eefe2df-kube-api-access-jqpgp\") pod \"kube-proxy-24rxq\" (UID: \"59be7261-86e7-4ba3-bf7b-c67b0eefe2df\") " pod="kube-system/kube-proxy-24rxq"
Dec 13 02:22:56.402863 kubelet[2243]: I1213 02:22:56.402857 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-run\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.402937 kubelet[2243]: I1213 02:22:56.402918 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-lib-modules\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.402998 kubelet[2243]: I1213 02:22:56.402986 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a03534de-984c-4cee-9dea-c0718f3c32f6-clustermesh-secrets\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.403126 kubelet[2243]: I1213 02:22:56.403089 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-config-path\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.403191 kubelet[2243]: I1213 02:22:56.403141 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59be7261-86e7-4ba3-bf7b-c67b0eefe2df-xtables-lock\") pod \"kube-proxy-24rxq\" (UID: \"59be7261-86e7-4ba3-bf7b-c67b0eefe2df\") " pod="kube-system/kube-proxy-24rxq"
Dec 13 02:22:56.403191 kubelet[2243]: I1213 02:22:56.403172 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-bpf-maps\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.403362 kubelet[2243]: I1213 02:22:56.403201 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-cgroup\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.403362 kubelet[2243]: I1213 02:22:56.403235 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-host-proc-sys-net\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.403362 kubelet[2243]: I1213 02:22:56.403266 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59be7261-86e7-4ba3-bf7b-c67b0eefe2df-kube-proxy\") pod \"kube-proxy-24rxq\" (UID: \"59be7261-86e7-4ba3-bf7b-c67b0eefe2df\") " pod="kube-system/kube-proxy-24rxq"
Dec 13 02:22:56.403520 kubelet[2243]: I1213 02:22:56.403392 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59be7261-86e7-4ba3-bf7b-c67b0eefe2df-lib-modules\") pod \"kube-proxy-24rxq\" (UID: \"59be7261-86e7-4ba3-bf7b-c67b0eefe2df\") " pod="kube-system/kube-proxy-24rxq"
Dec 13 02:22:56.403520 kubelet[2243]: I1213 02:22:56.403424 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-xtables-lock\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.403520 kubelet[2243]: I1213 02:22:56.403460 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-host-proc-sys-kernel\") pod \"cilium-nhgdf\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " pod="kube-system/cilium-nhgdf"
Dec 13 02:22:56.653459 env[1835]: time="2024-12-13T02:22:56.653154311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-24rxq,Uid:59be7261-86e7-4ba3-bf7b-c67b0eefe2df,Namespace:kube-system,Attempt:0,}"
Dec 13 02:22:56.653459 env[1835]: time="2024-12-13T02:22:56.653225677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhgdf,Uid:a03534de-984c-4cee-9dea-c0718f3c32f6,Namespace:kube-system,Attempt:0,}"
Dec 13 02:22:57.265085 env[1835]: time="2024-12-13T02:22:57.265032553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:22:57.269209 env[1835]: time="2024-12-13T02:22:57.269168023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:22:57.275177 env[1835]: time="2024-12-13T02:22:57.275011376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:22:57.277805 env[1835]: time="2024-12-13T02:22:57.277761586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:22:57.280859 env[1835]: time="2024-12-13T02:22:57.280825158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:22:57.284355 env[1835]: time="2024-12-13T02:22:57.284275329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:22:57.287863 env[1835]: time="2024-12-13T02:22:57.287699100Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:22:57.291475 env[1835]:
time="2024-12-13T02:22:57.291430659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:57.330606 env[1835]: time="2024-12-13T02:22:57.330541873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:57.330884 env[1835]: time="2024-12-13T02:22:57.330851680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:57.331235 env[1835]: time="2024-12-13T02:22:57.331104326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:57.331605 env[1835]: time="2024-12-13T02:22:57.331539894Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d90ae8cab91dcc3c180aaecf1407e1f79c7d94806918f9565e9ec37043cf6693 pid=2299 runtime=io.containerd.runc.v2 Dec 13 02:22:57.332906 env[1835]: time="2024-12-13T02:22:57.332844652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:57.333002 env[1835]: time="2024-12-13T02:22:57.332905053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:57.333002 env[1835]: time="2024-12-13T02:22:57.332922124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:57.333189 env[1835]: time="2024-12-13T02:22:57.333142144Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73 pid=2309 runtime=io.containerd.runc.v2 Dec 13 02:22:57.337178 kubelet[2243]: E1213 02:22:57.337137 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:22:57.400870 env[1835]: time="2024-12-13T02:22:57.400828122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-24rxq,Uid:59be7261-86e7-4ba3-bf7b-c67b0eefe2df,Namespace:kube-system,Attempt:0,} returns sandbox id \"d90ae8cab91dcc3c180aaecf1407e1f79c7d94806918f9565e9ec37043cf6693\"" Dec 13 02:22:57.404297 env[1835]: time="2024-12-13T02:22:57.404248000Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:22:57.415682 env[1835]: time="2024-12-13T02:22:57.415627993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nhgdf,Uid:a03534de-984c-4cee-9dea-c0718f3c32f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\"" Dec 13 02:22:57.526746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839668566.mount: Deactivated successfully. Dec 13 02:22:58.338329 kubelet[2243]: E1213 02:22:58.338248 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:22:58.814436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3957551889.mount: Deactivated successfully. 
Dec 13 02:22:59.339460 kubelet[2243]: E1213 02:22:59.339425 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:22:59.611905 env[1835]: time="2024-12-13T02:22:59.611568849Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:59.614176 env[1835]: time="2024-12-13T02:22:59.614133082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:59.616419 env[1835]: time="2024-12-13T02:22:59.616234518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:59.618016 env[1835]: time="2024-12-13T02:22:59.617976169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:59.618628 env[1835]: time="2024-12-13T02:22:59.618591733Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:22:59.620591 env[1835]: time="2024-12-13T02:22:59.620550989Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:22:59.621884 env[1835]: time="2024-12-13T02:22:59.621850788Z" level=info msg="CreateContainer within sandbox \"d90ae8cab91dcc3c180aaecf1407e1f79c7d94806918f9565e9ec37043cf6693\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:22:59.648563 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3648413623.mount: Deactivated successfully. Dec 13 02:22:59.653993 env[1835]: time="2024-12-13T02:22:59.653939117Z" level=info msg="CreateContainer within sandbox \"d90ae8cab91dcc3c180aaecf1407e1f79c7d94806918f9565e9ec37043cf6693\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8aadf33cf76fc7e762c2bdac7994bd6de3b16c961b550eda9a10711b0c9f47fe\"" Dec 13 02:22:59.654938 env[1835]: time="2024-12-13T02:22:59.654895684Z" level=info msg="StartContainer for \"8aadf33cf76fc7e762c2bdac7994bd6de3b16c961b550eda9a10711b0c9f47fe\"" Dec 13 02:22:59.755517 env[1835]: time="2024-12-13T02:22:59.753931200Z" level=info msg="StartContainer for \"8aadf33cf76fc7e762c2bdac7994bd6de3b16c961b550eda9a10711b0c9f47fe\" returns successfully" Dec 13 02:23:00.346739 kubelet[2243]: E1213 02:23:00.345740 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:01.347444 kubelet[2243]: E1213 02:23:01.347335 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:02.355122 kubelet[2243]: E1213 02:23:02.355084 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:03.356284 kubelet[2243]: E1213 02:23:03.356235 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:04.357004 kubelet[2243]: E1213 02:23:04.356904 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:05.357911 kubelet[2243]: E1213 02:23:05.357828 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:06.358816 kubelet[2243]: E1213 02:23:06.358776 2243 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:06.479987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051023427.mount: Deactivated successfully. Dec 13 02:23:07.359751 kubelet[2243]: E1213 02:23:07.359670 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:07.515183 amazon-ssm-agent[1811]: 2024-12-13 02:23:07 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 02:23:08.360115 kubelet[2243]: E1213 02:23:08.360007 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:09.360581 kubelet[2243]: E1213 02:23:09.360542 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:10.362162 kubelet[2243]: E1213 02:23:10.362082 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:10.634839 env[1835]: time="2024-12-13T02:23:10.634395479Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:10.638971 env[1835]: time="2024-12-13T02:23:10.638925219Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:10.644785 env[1835]: time="2024-12-13T02:23:10.644739676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
02:23:10.645195 env[1835]: time="2024-12-13T02:23:10.645109917Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:23:10.651872 env[1835]: time="2024-12-13T02:23:10.651827327Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:23:10.680112 env[1835]: time="2024-12-13T02:23:10.680010360Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\"" Dec 13 02:23:10.681509 env[1835]: time="2024-12-13T02:23:10.681458724Z" level=info msg="StartContainer for \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\"" Dec 13 02:23:10.731690 systemd[1]: run-containerd-runc-k8s.io-2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5-runc.JPw1Fk.mount: Deactivated successfully. 
Dec 13 02:23:10.789537 env[1835]: time="2024-12-13T02:23:10.788954127Z" level=info msg="StartContainer for \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\" returns successfully" Dec 13 02:23:11.045016 env[1835]: time="2024-12-13T02:23:11.044884866Z" level=info msg="shim disconnected" id=2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5 Dec 13 02:23:11.045016 env[1835]: time="2024-12-13T02:23:11.045015860Z" level=warning msg="cleaning up after shim disconnected" id=2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5 namespace=k8s.io Dec 13 02:23:11.045328 env[1835]: time="2024-12-13T02:23:11.045030275Z" level=info msg="cleaning up dead shim" Dec 13 02:23:11.084671 env[1835]: time="2024-12-13T02:23:11.084602548Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:23:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2583 runtime=io.containerd.runc.v2\n" Dec 13 02:23:11.362868 kubelet[2243]: E1213 02:23:11.362456 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:11.611484 env[1835]: time="2024-12-13T02:23:11.611441247Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:23:11.624680 env[1835]: time="2024-12-13T02:23:11.624522723Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\"" Dec 13 02:23:11.625498 env[1835]: time="2024-12-13T02:23:11.625464241Z" level=info msg="StartContainer for \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\"" Dec 13 02:23:11.658383 kubelet[2243]: I1213 02:23:11.657785 2243 pod_startup_latency_tracker.go:102] 
"Observed pod startup duration" pod="kube-system/kube-proxy-24rxq" podStartSLOduration=14.442115115 podStartE2EDuration="16.657703617s" podCreationTimestamp="2024-12-13 02:22:55 +0000 UTC" firstStartedPulling="2024-12-13 02:22:57.403653533 +0000 UTC m=+2.737889808" lastFinishedPulling="2024-12-13 02:22:59.619242029 +0000 UTC m=+4.953478310" observedRunningTime="2024-12-13 02:23:00.613752877 +0000 UTC m=+5.947989175" watchObservedRunningTime="2024-12-13 02:23:11.657703617 +0000 UTC m=+16.991939916" Dec 13 02:23:11.665928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5-rootfs.mount: Deactivated successfully. Dec 13 02:23:11.695486 env[1835]: time="2024-12-13T02:23:11.695432715Z" level=info msg="StartContainer for \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\" returns successfully" Dec 13 02:23:11.703290 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:23:11.705486 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:23:11.705705 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:23:11.708139 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:23:11.713015 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:23:11.738533 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:23:11.747994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a-rootfs.mount: Deactivated successfully. 
Dec 13 02:23:11.766577 env[1835]: time="2024-12-13T02:23:11.766520414Z" level=info msg="shim disconnected" id=eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a Dec 13 02:23:11.766989 env[1835]: time="2024-12-13T02:23:11.766956273Z" level=warning msg="cleaning up after shim disconnected" id=eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a namespace=k8s.io Dec 13 02:23:11.766989 env[1835]: time="2024-12-13T02:23:11.766987040Z" level=info msg="cleaning up dead shim" Dec 13 02:23:11.775982 env[1835]: time="2024-12-13T02:23:11.775866790Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:23:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2645 runtime=io.containerd.runc.v2\n" Dec 13 02:23:12.363285 kubelet[2243]: E1213 02:23:12.363180 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:12.618011 env[1835]: time="2024-12-13T02:23:12.617473683Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:23:12.636950 env[1835]: time="2024-12-13T02:23:12.636901583Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\"" Dec 13 02:23:12.637789 env[1835]: time="2024-12-13T02:23:12.637753377Z" level=info msg="StartContainer for \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\"" Dec 13 02:23:12.701956 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 02:23:12.749487 env[1835]: time="2024-12-13T02:23:12.746327544Z" level=info msg="StartContainer for \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\" returns successfully" Dec 13 02:23:12.783704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e-rootfs.mount: Deactivated successfully. Dec 13 02:23:12.790730 env[1835]: time="2024-12-13T02:23:12.790672023Z" level=info msg="shim disconnected" id=6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e Dec 13 02:23:12.790730 env[1835]: time="2024-12-13T02:23:12.790729501Z" level=warning msg="cleaning up after shim disconnected" id=6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e namespace=k8s.io Dec 13 02:23:12.791029 env[1835]: time="2024-12-13T02:23:12.790741498Z" level=info msg="cleaning up dead shim" Dec 13 02:23:12.802156 env[1835]: time="2024-12-13T02:23:12.802013527Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:23:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2707 runtime=io.containerd.runc.v2\n" Dec 13 02:23:13.363602 kubelet[2243]: E1213 02:23:13.363546 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:13.626146 env[1835]: time="2024-12-13T02:23:13.625903612Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:23:13.663444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913179012.mount: Deactivated successfully. 
Dec 13 02:23:13.672395 env[1835]: time="2024-12-13T02:23:13.672201746Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\"" Dec 13 02:23:13.675259 env[1835]: time="2024-12-13T02:23:13.675105717Z" level=info msg="StartContainer for \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\"" Dec 13 02:23:13.737019 systemd[1]: run-containerd-runc-k8s.io-8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7-runc.xWVMPn.mount: Deactivated successfully. Dec 13 02:23:13.779842 env[1835]: time="2024-12-13T02:23:13.779793034Z" level=info msg="StartContainer for \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\" returns successfully" Dec 13 02:23:13.803612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7-rootfs.mount: Deactivated successfully. 
Dec 13 02:23:13.811568 env[1835]: time="2024-12-13T02:23:13.811466693Z" level=info msg="shim disconnected" id=8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7 Dec 13 02:23:13.811568 env[1835]: time="2024-12-13T02:23:13.811560287Z" level=warning msg="cleaning up after shim disconnected" id=8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7 namespace=k8s.io Dec 13 02:23:13.811568 env[1835]: time="2024-12-13T02:23:13.811574329Z" level=info msg="cleaning up dead shim" Dec 13 02:23:13.822536 env[1835]: time="2024-12-13T02:23:13.822488011Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:23:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2763 runtime=io.containerd.runc.v2\n" Dec 13 02:23:14.364393 kubelet[2243]: E1213 02:23:14.364355 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:14.643965 env[1835]: time="2024-12-13T02:23:14.643698436Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:23:14.687793 env[1835]: time="2024-12-13T02:23:14.687670139Z" level=info msg="CreateContainer within sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\"" Dec 13 02:23:14.692179 env[1835]: time="2024-12-13T02:23:14.692134121Z" level=info msg="StartContainer for \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\"" Dec 13 02:23:14.706856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642218509.mount: Deactivated successfully. 
Dec 13 02:23:14.786657 env[1835]: time="2024-12-13T02:23:14.786613484Z" level=info msg="StartContainer for \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\" returns successfully" Dec 13 02:23:15.000963 kubelet[2243]: I1213 02:23:15.000931 2243 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:23:15.336038 kubelet[2243]: E1213 02:23:15.335919 2243 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:15.365190 kubelet[2243]: E1213 02:23:15.365150 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:15.592400 kernel: Initializing XFRM netlink socket Dec 13 02:23:16.366252 kubelet[2243]: E1213 02:23:16.366204 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:17.320858 (udev-worker)[2866]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:17.326076 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:23:17.326132 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:23:17.321578 (udev-worker)[2868]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:17.325810 systemd-networkd[1511]: cilium_host: Link UP Dec 13 02:23:17.326040 systemd-networkd[1511]: cilium_net: Link UP Dec 13 02:23:17.326238 systemd-networkd[1511]: cilium_net: Gained carrier Dec 13 02:23:17.326475 systemd-networkd[1511]: cilium_host: Gained carrier Dec 13 02:23:17.354677 systemd-networkd[1511]: cilium_net: Gained IPv6LL Dec 13 02:23:17.369377 kubelet[2243]: E1213 02:23:17.369320 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:17.534885 (udev-worker)[2924]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 02:23:17.548163 systemd-networkd[1511]: cilium_vxlan: Link UP Dec 13 02:23:17.548172 systemd-networkd[1511]: cilium_vxlan: Gained carrier Dec 13 02:23:17.852424 kernel: NET: Registered PF_ALG protocol family Dec 13 02:23:18.075517 systemd-networkd[1511]: cilium_host: Gained IPv6LL Dec 13 02:23:18.373661 kubelet[2243]: E1213 02:23:18.373552 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:18.650900 systemd-networkd[1511]: cilium_vxlan: Gained IPv6LL Dec 13 02:23:18.991244 systemd-networkd[1511]: lxc_health: Link UP Dec 13 02:23:19.012659 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:23:19.015178 systemd-networkd[1511]: lxc_health: Gained carrier Dec 13 02:23:19.379847 kubelet[2243]: E1213 02:23:19.379742 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:20.240740 kubelet[2243]: I1213 02:23:20.240530 2243 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-nhgdf" podStartSLOduration=12.008299836 podStartE2EDuration="25.240459703s" podCreationTimestamp="2024-12-13 02:22:55 +0000 UTC" firstStartedPulling="2024-12-13 02:22:57.417440932 +0000 UTC m=+2.751677210" lastFinishedPulling="2024-12-13 02:23:10.649600791 +0000 UTC m=+15.983837077" observedRunningTime="2024-12-13 02:23:15.689971562 +0000 UTC m=+21.024207860" watchObservedRunningTime="2024-12-13 02:23:20.240459703 +0000 UTC m=+25.574696005" Dec 13 02:23:20.241406 kubelet[2243]: I1213 02:23:20.241382 2243 topology_manager.go:215] "Topology Admit Handler" podUID="506f1f5b-27a3-4bee-9b06-c66692ab8a1c" podNamespace="default" podName="nginx-deployment-6d5f899847-wxj5n" Dec 13 02:23:20.351860 kubelet[2243]: I1213 02:23:20.351818 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfwx5\" (UniqueName: 
\"kubernetes.io/projected/506f1f5b-27a3-4bee-9b06-c66692ab8a1c-kube-api-access-tfwx5\") pod \"nginx-deployment-6d5f899847-wxj5n\" (UID: \"506f1f5b-27a3-4bee-9b06-c66692ab8a1c\") " pod="default/nginx-deployment-6d5f899847-wxj5n" Dec 13 02:23:20.384038 kubelet[2243]: E1213 02:23:20.383981 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:20.546912 env[1835]: time="2024-12-13T02:23:20.546385016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-wxj5n,Uid:506f1f5b-27a3-4bee-9b06-c66692ab8a1c,Namespace:default,Attempt:0,}" Dec 13 02:23:20.623813 systemd-networkd[1511]: lxc0aff945f5bb1: Link UP Dec 13 02:23:20.637432 kernel: eth0: renamed from tmp93bf6 Dec 13 02:23:20.641034 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:23:20.641128 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0aff945f5bb1: link becomes ready Dec 13 02:23:20.640531 systemd-networkd[1511]: lxc0aff945f5bb1: Gained carrier Dec 13 02:23:20.972462 systemd-networkd[1511]: lxc_health: Gained IPv6LL Dec 13 02:23:21.385373 kubelet[2243]: E1213 02:23:21.385318 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:22.392760 kubelet[2243]: E1213 02:23:22.392706 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:22.542478 systemd-networkd[1511]: lxc0aff945f5bb1: Gained IPv6LL Dec 13 02:23:23.393114 kubelet[2243]: E1213 02:23:23.393068 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:24.393579 kubelet[2243]: E1213 02:23:24.393534 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:25.046881 env[1835]: time="2024-12-13T02:23:25.046796492Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:23:25.046881 env[1835]: time="2024-12-13T02:23:25.046846392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:23:25.047490 env[1835]: time="2024-12-13T02:23:25.046861719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:23:25.047490 env[1835]: time="2024-12-13T02:23:25.047029711Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93bf6de1b836f63319fe6428106a68d07f183a494c4fce2888fe3ca21aa251a9 pid=3277 runtime=io.containerd.runc.v2 Dec 13 02:23:25.079533 systemd[1]: run-containerd-runc-k8s.io-93bf6de1b836f63319fe6428106a68d07f183a494c4fce2888fe3ca21aa251a9-runc.trwzGM.mount: Deactivated successfully. Dec 13 02:23:25.120266 env[1835]: time="2024-12-13T02:23:25.120204731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-wxj5n,Uid:506f1f5b-27a3-4bee-9b06-c66692ab8a1c,Namespace:default,Attempt:0,} returns sandbox id \"93bf6de1b836f63319fe6428106a68d07f183a494c4fce2888fe3ca21aa251a9\"" Dec 13 02:23:25.122220 env[1835]: time="2024-12-13T02:23:25.122164704Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:23:25.394537 kubelet[2243]: E1213 02:23:25.394412 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:26.395363 kubelet[2243]: E1213 02:23:26.395291 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:26.966532 update_engine[1829]: I1213 02:23:26.965826 1829 update_attempter.cc:509] Updating boot flags... 
Dec 13 02:23:27.397526 kubelet[2243]: E1213 02:23:27.397462 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:28.249095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675243732.mount: Deactivated successfully. Dec 13 02:23:28.398529 kubelet[2243]: E1213 02:23:28.398466 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:29.398763 kubelet[2243]: E1213 02:23:29.398697 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:30.253228 env[1835]: time="2024-12-13T02:23:30.253165504Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:30.258089 env[1835]: time="2024-12-13T02:23:30.258038402Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:30.260271 env[1835]: time="2024-12-13T02:23:30.260214934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:30.262941 env[1835]: time="2024-12-13T02:23:30.262886175Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:30.264030 env[1835]: time="2024-12-13T02:23:30.263987561Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:23:30.266572 env[1835]: 
time="2024-12-13T02:23:30.266539146Z" level=info msg="CreateContainer within sandbox \"93bf6de1b836f63319fe6428106a68d07f183a494c4fce2888fe3ca21aa251a9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 02:23:30.288917 env[1835]: time="2024-12-13T02:23:30.288860895Z" level=info msg="CreateContainer within sandbox \"93bf6de1b836f63319fe6428106a68d07f183a494c4fce2888fe3ca21aa251a9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3aad8e7b10f17c642d832a25fc49cd75c1f651b107582d3ce086cd72725db525\"" Dec 13 02:23:30.289459 env[1835]: time="2024-12-13T02:23:30.289408602Z" level=info msg="StartContainer for \"3aad8e7b10f17c642d832a25fc49cd75c1f651b107582d3ce086cd72725db525\"" Dec 13 02:23:30.365048 env[1835]: time="2024-12-13T02:23:30.364918895Z" level=info msg="StartContainer for \"3aad8e7b10f17c642d832a25fc49cd75c1f651b107582d3ce086cd72725db525\" returns successfully" Dec 13 02:23:30.399875 kubelet[2243]: E1213 02:23:30.399749 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:30.689647 kubelet[2243]: I1213 02:23:30.689508 2243 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-wxj5n" podStartSLOduration=5.546850611 podStartE2EDuration="10.68946914s" podCreationTimestamp="2024-12-13 02:23:20 +0000 UTC" firstStartedPulling="2024-12-13 02:23:25.121695576 +0000 UTC m=+30.455931852" lastFinishedPulling="2024-12-13 02:23:30.264314092 +0000 UTC m=+35.598550381" observedRunningTime="2024-12-13 02:23:30.689186091 +0000 UTC m=+36.023422388" watchObservedRunningTime="2024-12-13 02:23:30.68946914 +0000 UTC m=+36.023705437" Dec 13 02:23:31.400362 kubelet[2243]: E1213 02:23:31.400288 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:32.401044 kubelet[2243]: E1213 02:23:32.400986 2243 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:33.401850 kubelet[2243]: E1213 02:23:33.401799 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:34.402556 kubelet[2243]: E1213 02:23:34.402504 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:35.336090 kubelet[2243]: E1213 02:23:35.336036 2243 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:35.402694 kubelet[2243]: E1213 02:23:35.402637 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:35.681932 kubelet[2243]: I1213 02:23:35.681803 2243 topology_manager.go:215] "Topology Admit Handler" podUID="7f74e663-dcdb-4b28-a870-1d5a36714728" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 02:23:35.773677 kubelet[2243]: I1213 02:23:35.773638 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcqm\" (UniqueName: \"kubernetes.io/projected/7f74e663-dcdb-4b28-a870-1d5a36714728-kube-api-access-gvcqm\") pod \"nfs-server-provisioner-0\" (UID: \"7f74e663-dcdb-4b28-a870-1d5a36714728\") " pod="default/nfs-server-provisioner-0" Dec 13 02:23:35.773677 kubelet[2243]: I1213 02:23:35.773695 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7f74e663-dcdb-4b28-a870-1d5a36714728-data\") pod \"nfs-server-provisioner-0\" (UID: \"7f74e663-dcdb-4b28-a870-1d5a36714728\") " pod="default/nfs-server-provisioner-0" Dec 13 02:23:35.988684 env[1835]: time="2024-12-13T02:23:35.988561605Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7f74e663-dcdb-4b28-a870-1d5a36714728,Namespace:default,Attempt:0,}" Dec 13 02:23:36.068367 systemd-networkd[1511]: lxc45fd59c1a564: Link UP Dec 13 02:23:36.078496 kernel: eth0: renamed from tmpd7da5 Dec 13 02:23:36.082238 (udev-worker)[3496]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:36.086445 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:23:36.086570 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc45fd59c1a564: link becomes ready Dec 13 02:23:36.087801 systemd-networkd[1511]: lxc45fd59c1a564: Gained carrier Dec 13 02:23:36.316247 env[1835]: time="2024-12-13T02:23:36.316125691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:23:36.316640 env[1835]: time="2024-12-13T02:23:36.316219378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:23:36.316640 env[1835]: time="2024-12-13T02:23:36.316239717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:23:36.316946 env[1835]: time="2024-12-13T02:23:36.316847044Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7da5eaeeebb827736675c71a9aae2cbc4b81b8d1d9bf930e1f25afd4db15356 pid=3511 runtime=io.containerd.runc.v2 Dec 13 02:23:36.404212 kubelet[2243]: E1213 02:23:36.404183 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:36.412821 env[1835]: time="2024-12-13T02:23:36.412777604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7f74e663-dcdb-4b28-a870-1d5a36714728,Namespace:default,Attempt:0,} returns sandbox id \"d7da5eaeeebb827736675c71a9aae2cbc4b81b8d1d9bf930e1f25afd4db15356\"" Dec 13 02:23:36.414901 env[1835]: time="2024-12-13T02:23:36.414752575Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 02:23:36.892981 systemd[1]: run-containerd-runc-k8s.io-d7da5eaeeebb827736675c71a9aae2cbc4b81b8d1d9bf930e1f25afd4db15356-runc.j9Uo1X.mount: Deactivated successfully. 
Dec 13 02:23:37.405440 kubelet[2243]: E1213 02:23:37.405378 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:37.543874 amazon-ssm-agent[1811]: 2024-12-13 02:23:37 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 02:23:37.914666 systemd-networkd[1511]: lxc45fd59c1a564: Gained IPv6LL Dec 13 02:23:38.406535 kubelet[2243]: E1213 02:23:38.406468 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:39.407012 kubelet[2243]: E1213 02:23:39.406966 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:39.641359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4135717843.mount: Deactivated successfully. Dec 13 02:23:40.407562 kubelet[2243]: E1213 02:23:40.407501 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:41.407875 kubelet[2243]: E1213 02:23:41.407810 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:42.137835 env[1835]: time="2024-12-13T02:23:42.137779829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:42.149739 env[1835]: time="2024-12-13T02:23:42.149689304Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:42.157947 env[1835]: time="2024-12-13T02:23:42.157782512Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:42.168587 env[1835]: time="2024-12-13T02:23:42.168542061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:42.170962 env[1835]: time="2024-12-13T02:23:42.170860337Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 02:23:42.185372 env[1835]: time="2024-12-13T02:23:42.185268816Z" level=info msg="CreateContainer within sandbox \"d7da5eaeeebb827736675c71a9aae2cbc4b81b8d1d9bf930e1f25afd4db15356\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 02:23:42.202916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793057345.mount: Deactivated successfully. 
Dec 13 02:23:42.219586 env[1835]: time="2024-12-13T02:23:42.219536860Z" level=info msg="CreateContainer within sandbox \"d7da5eaeeebb827736675c71a9aae2cbc4b81b8d1d9bf930e1f25afd4db15356\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"96ac2a8498fe7a44eb6a16d1242f34db812d03256065bfc1b81205f163f349a3\"" Dec 13 02:23:42.220395 env[1835]: time="2024-12-13T02:23:42.220300940Z" level=info msg="StartContainer for \"96ac2a8498fe7a44eb6a16d1242f34db812d03256065bfc1b81205f163f349a3\"" Dec 13 02:23:42.298846 env[1835]: time="2024-12-13T02:23:42.298793910Z" level=info msg="StartContainer for \"96ac2a8498fe7a44eb6a16d1242f34db812d03256065bfc1b81205f163f349a3\" returns successfully" Dec 13 02:23:42.408471 kubelet[2243]: E1213 02:23:42.408315 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:42.794495 kubelet[2243]: I1213 02:23:42.794455 2243 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.037441023 podStartE2EDuration="7.794411332s" podCreationTimestamp="2024-12-13 02:23:35 +0000 UTC" firstStartedPulling="2024-12-13 02:23:36.414215125 +0000 UTC m=+41.748451400" lastFinishedPulling="2024-12-13 02:23:42.171185424 +0000 UTC m=+47.505421709" observedRunningTime="2024-12-13 02:23:42.793981367 +0000 UTC m=+48.128217669" watchObservedRunningTime="2024-12-13 02:23:42.794411332 +0000 UTC m=+48.128647629" Dec 13 02:23:43.195137 systemd[1]: run-containerd-runc-k8s.io-96ac2a8498fe7a44eb6a16d1242f34db812d03256065bfc1b81205f163f349a3-runc.KUUV2B.mount: Deactivated successfully. 
Dec 13 02:23:43.409245 kubelet[2243]: E1213 02:23:43.409194 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:44.410068 kubelet[2243]: E1213 02:23:44.410016 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:45.411047 kubelet[2243]: E1213 02:23:45.410985 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:46.412064 kubelet[2243]: E1213 02:23:46.412013 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:47.416364 kubelet[2243]: E1213 02:23:47.416300 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:47.693060 kubelet[2243]: I1213 02:23:47.692928 2243 topology_manager.go:215] "Topology Admit Handler" podUID="b8b859c8-a454-4881-a768-f55f3b2e7993" podNamespace="default" podName="test-pod-1" Dec 13 02:23:47.777621 kubelet[2243]: I1213 02:23:47.777572 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b34da1a6-cf1b-45b1-8fc4-e224c1ad444b\" (UniqueName: \"kubernetes.io/nfs/b8b859c8-a454-4881-a768-f55f3b2e7993-pvc-b34da1a6-cf1b-45b1-8fc4-e224c1ad444b\") pod \"test-pod-1\" (UID: \"b8b859c8-a454-4881-a768-f55f3b2e7993\") " pod="default/test-pod-1" Dec 13 02:23:47.777621 kubelet[2243]: I1213 02:23:47.777627 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5v6\" (UniqueName: \"kubernetes.io/projected/b8b859c8-a454-4881-a768-f55f3b2e7993-kube-api-access-9q5v6\") pod \"test-pod-1\" (UID: \"b8b859c8-a454-4881-a768-f55f3b2e7993\") " pod="default/test-pod-1" Dec 13 02:23:47.947410 kernel: FS-Cache: Loaded Dec 13 02:23:48.024359 kernel: RPC: Registered 
named UNIX socket transport module. Dec 13 02:23:48.024489 kernel: RPC: Registered udp transport module. Dec 13 02:23:48.024524 kernel: RPC: Registered tcp transport module. Dec 13 02:23:48.024587 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 02:23:48.117439 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 02:23:48.382178 kernel: NFS: Registering the id_resolver key type Dec 13 02:23:48.382316 kernel: Key type id_resolver registered Dec 13 02:23:48.384361 kernel: Key type id_legacy registered Dec 13 02:23:48.418366 kubelet[2243]: E1213 02:23:48.418250 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:48.439438 nfsidmap[3627]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 02:23:48.449109 nfsidmap[3628]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 02:23:48.599948 env[1835]: time="2024-12-13T02:23:48.599898565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b8b859c8-a454-4881-a768-f55f3b2e7993,Namespace:default,Attempt:0,}" Dec 13 02:23:48.638729 (udev-worker)[3613]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:48.640288 (udev-worker)[3622]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 02:23:48.644313 systemd-networkd[1511]: lxcd45acd97bf3a: Link UP Dec 13 02:23:48.651473 kernel: eth0: renamed from tmp8fbd6 Dec 13 02:23:48.666525 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:23:48.666695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd45acd97bf3a: link becomes ready Dec 13 02:23:48.667771 systemd-networkd[1511]: lxcd45acd97bf3a: Gained carrier Dec 13 02:23:48.938762 env[1835]: time="2024-12-13T02:23:48.938594551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:23:48.938762 env[1835]: time="2024-12-13T02:23:48.938650390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:23:48.938762 env[1835]: time="2024-12-13T02:23:48.938665423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:23:48.939613 env[1835]: time="2024-12-13T02:23:48.939499849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fbd6ec902796aaef653a3654f05d45a503a306e09672c6dc94b9e8c47c55ce9 pid=3654 runtime=io.containerd.runc.v2 Dec 13 02:23:49.008572 systemd[1]: run-containerd-runc-k8s.io-8fbd6ec902796aaef653a3654f05d45a503a306e09672c6dc94b9e8c47c55ce9-runc.jQRLkC.mount: Deactivated successfully. 
Dec 13 02:23:49.080459 env[1835]: time="2024-12-13T02:23:49.080414253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b8b859c8-a454-4881-a768-f55f3b2e7993,Namespace:default,Attempt:0,} returns sandbox id \"8fbd6ec902796aaef653a3654f05d45a503a306e09672c6dc94b9e8c47c55ce9\"" Dec 13 02:23:49.083036 env[1835]: time="2024-12-13T02:23:49.083004265Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:23:49.403878 env[1835]: time="2024-12-13T02:23:49.403830273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:49.406154 env[1835]: time="2024-12-13T02:23:49.406108777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:49.408055 env[1835]: time="2024-12-13T02:23:49.408010108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:49.410006 env[1835]: time="2024-12-13T02:23:49.409963844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:23:49.410721 env[1835]: time="2024-12-13T02:23:49.410684837Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:23:49.413140 env[1835]: time="2024-12-13T02:23:49.413103720Z" level=info msg="CreateContainer within sandbox \"8fbd6ec902796aaef653a3654f05d45a503a306e09672c6dc94b9e8c47c55ce9\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 02:23:49.419460 kubelet[2243]: 
E1213 02:23:49.419421 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:49.434982 env[1835]: time="2024-12-13T02:23:49.434919219Z" level=info msg="CreateContainer within sandbox \"8fbd6ec902796aaef653a3654f05d45a503a306e09672c6dc94b9e8c47c55ce9\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"cdab84fc644cffe8be4fd96a7749dfcc268d204d0391a0d7b82b9eb90f301ca1\"" Dec 13 02:23:49.436284 env[1835]: time="2024-12-13T02:23:49.436233282Z" level=info msg="StartContainer for \"cdab84fc644cffe8be4fd96a7749dfcc268d204d0391a0d7b82b9eb90f301ca1\"" Dec 13 02:23:49.500509 env[1835]: time="2024-12-13T02:23:49.500453826Z" level=info msg="StartContainer for \"cdab84fc644cffe8be4fd96a7749dfcc268d204d0391a0d7b82b9eb90f301ca1\" returns successfully" Dec 13 02:23:49.813158 kubelet[2243]: I1213 02:23:49.812932 2243 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=13.48427017 podStartE2EDuration="13.812892957s" podCreationTimestamp="2024-12-13 02:23:36 +0000 UTC" firstStartedPulling="2024-12-13 02:23:49.082366631 +0000 UTC m=+54.416602920" lastFinishedPulling="2024-12-13 02:23:49.410989421 +0000 UTC m=+54.745225707" observedRunningTime="2024-12-13 02:23:49.812627659 +0000 UTC m=+55.146863958" watchObservedRunningTime="2024-12-13 02:23:49.812892957 +0000 UTC m=+55.147129255" Dec 13 02:23:50.420090 kubelet[2243]: E1213 02:23:50.420038 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:50.586885 systemd-networkd[1511]: lxcd45acd97bf3a: Gained IPv6LL Dec 13 02:23:51.421109 kubelet[2243]: E1213 02:23:51.421048 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:52.421951 kubelet[2243]: E1213 02:23:52.421889 2243 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:53.422427 kubelet[2243]: E1213 02:23:53.422376 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:54.422756 kubelet[2243]: E1213 02:23:54.422687 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:55.336328 kubelet[2243]: E1213 02:23:55.336270 2243 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:55.423579 kubelet[2243]: E1213 02:23:55.423525 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:56.424148 kubelet[2243]: E1213 02:23:56.424101 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:57.424899 kubelet[2243]: E1213 02:23:57.424848 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:58.425408 kubelet[2243]: E1213 02:23:58.425355 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:23:59.425624 kubelet[2243]: E1213 02:23:59.425569 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:00.426261 kubelet[2243]: E1213 02:24:00.426188 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:00.435330 env[1835]: time="2024-12-13T02:24:00.434854356Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load 
cni config" Dec 13 02:24:00.447682 env[1835]: time="2024-12-13T02:24:00.447643685Z" level=info msg="StopContainer for \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\" with timeout 2 (s)" Dec 13 02:24:00.448026 env[1835]: time="2024-12-13T02:24:00.447987285Z" level=info msg="Stop container \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\" with signal terminated" Dec 13 02:24:00.457313 systemd-networkd[1511]: lxc_health: Link DOWN Dec 13 02:24:00.457325 systemd-networkd[1511]: lxc_health: Lost carrier Dec 13 02:24:00.476815 kubelet[2243]: E1213 02:24:00.476764 2243 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:24:00.610210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580-rootfs.mount: Deactivated successfully. Dec 13 02:24:00.640586 env[1835]: time="2024-12-13T02:24:00.640519888Z" level=info msg="shim disconnected" id=9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580 Dec 13 02:24:00.640586 env[1835]: time="2024-12-13T02:24:00.640576800Z" level=warning msg="cleaning up after shim disconnected" id=9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580 namespace=k8s.io Dec 13 02:24:00.640586 env[1835]: time="2024-12-13T02:24:00.640591165Z" level=info msg="cleaning up dead shim" Dec 13 02:24:00.677327 env[1835]: time="2024-12-13T02:24:00.676688707Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3788 runtime=io.containerd.runc.v2\n" Dec 13 02:24:00.680496 env[1835]: time="2024-12-13T02:24:00.680440790Z" level=info msg="StopContainer for \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\" returns successfully" Dec 13 02:24:00.682505 env[1835]: time="2024-12-13T02:24:00.682470064Z" level=info 
msg="StopPodSandbox for \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\"" Dec 13 02:24:00.682664 env[1835]: time="2024-12-13T02:24:00.682545122Z" level=info msg="Container to stop \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:00.682664 env[1835]: time="2024-12-13T02:24:00.682564891Z" level=info msg="Container to stop \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:00.682664 env[1835]: time="2024-12-13T02:24:00.682579872Z" level=info msg="Container to stop \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:00.682664 env[1835]: time="2024-12-13T02:24:00.682597023Z" level=info msg="Container to stop \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:00.682664 env[1835]: time="2024-12-13T02:24:00.682614372Z" level=info msg="Container to stop \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:24:00.685898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73-shm.mount: Deactivated successfully. Dec 13 02:24:00.716508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73-rootfs.mount: Deactivated successfully. 
Dec 13 02:24:00.726068 env[1835]: time="2024-12-13T02:24:00.726011990Z" level=info msg="shim disconnected" id=7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73 Dec 13 02:24:00.726288 env[1835]: time="2024-12-13T02:24:00.726077222Z" level=warning msg="cleaning up after shim disconnected" id=7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73 namespace=k8s.io Dec 13 02:24:00.726288 env[1835]: time="2024-12-13T02:24:00.726090317Z" level=info msg="cleaning up dead shim" Dec 13 02:24:00.735456 env[1835]: time="2024-12-13T02:24:00.735411460Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3826 runtime=io.containerd.runc.v2\n" Dec 13 02:24:00.736212 env[1835]: time="2024-12-13T02:24:00.736178333Z" level=info msg="TearDown network for sandbox \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" successfully" Dec 13 02:24:00.736310 env[1835]: time="2024-12-13T02:24:00.736209000Z" level=info msg="StopPodSandbox for \"7ea24f30a99f8b005f90fc03d1e868d69f6c4cfd9fd146f107ccc74994b7fa73\" returns successfully" Dec 13 02:24:00.820306 kubelet[2243]: I1213 02:24:00.820280 2243 scope.go:117] "RemoveContainer" containerID="9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580" Dec 13 02:24:00.821939 env[1835]: time="2024-12-13T02:24:00.821903751Z" level=info msg="RemoveContainer for \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\"" Dec 13 02:24:00.829364 env[1835]: time="2024-12-13T02:24:00.829289343Z" level=info msg="RemoveContainer for \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\" returns successfully" Dec 13 02:24:00.829733 kubelet[2243]: I1213 02:24:00.829708 2243 scope.go:117] "RemoveContainer" containerID="8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7" Dec 13 02:24:00.831584 env[1835]: time="2024-12-13T02:24:00.831531446Z" level=info msg="RemoveContainer for 
\"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\"" Dec 13 02:24:00.834881 env[1835]: time="2024-12-13T02:24:00.834836279Z" level=info msg="RemoveContainer for \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\" returns successfully" Dec 13 02:24:00.835071 kubelet[2243]: I1213 02:24:00.835046 2243 scope.go:117] "RemoveContainer" containerID="6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e" Dec 13 02:24:00.836358 env[1835]: time="2024-12-13T02:24:00.836313471Z" level=info msg="RemoveContainer for \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\"" Dec 13 02:24:00.839519 env[1835]: time="2024-12-13T02:24:00.839479538Z" level=info msg="RemoveContainer for \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\" returns successfully" Dec 13 02:24:00.839812 kubelet[2243]: I1213 02:24:00.839787 2243 scope.go:117] "RemoveContainer" containerID="eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a" Dec 13 02:24:00.841080 env[1835]: time="2024-12-13T02:24:00.841046111Z" level=info msg="RemoveContainer for \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\"" Dec 13 02:24:00.844144 env[1835]: time="2024-12-13T02:24:00.844096522Z" level=info msg="RemoveContainer for \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\" returns successfully" Dec 13 02:24:00.844382 kubelet[2243]: I1213 02:24:00.844337 2243 scope.go:117] "RemoveContainer" containerID="2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5" Dec 13 02:24:00.845582 env[1835]: time="2024-12-13T02:24:00.845547609Z" level=info msg="RemoveContainer for \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\"" Dec 13 02:24:00.848685 env[1835]: time="2024-12-13T02:24:00.848638854Z" level=info msg="RemoveContainer for \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\" returns successfully" Dec 13 02:24:00.848895 kubelet[2243]: I1213 02:24:00.848876 2243 
scope.go:117] "RemoveContainer" containerID="9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580" Dec 13 02:24:00.849445 env[1835]: time="2024-12-13T02:24:00.849361889Z" level=error msg="ContainerStatus for \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\": not found" Dec 13 02:24:00.849586 kubelet[2243]: E1213 02:24:00.849564 2243 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\": not found" containerID="9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580" Dec 13 02:24:00.849697 kubelet[2243]: I1213 02:24:00.849675 2243 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580"} err="failed to get container status \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f05858d94239216191600e538a4e4163c6f0636f350a868439bd6269f9ef580\": not found" Dec 13 02:24:00.849853 kubelet[2243]: I1213 02:24:00.849700 2243 scope.go:117] "RemoveContainer" containerID="8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7" Dec 13 02:24:00.850036 env[1835]: time="2024-12-13T02:24:00.849975419Z" level=error msg="ContainerStatus for \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\": not found" Dec 13 02:24:00.850161 kubelet[2243]: E1213 02:24:00.850141 2243 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\": not found" containerID="8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7" Dec 13 02:24:00.850238 kubelet[2243]: I1213 02:24:00.850183 2243 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7"} err="failed to get container status \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"8760dec4e2110a103e2a71ddff9e954c6b7a74bf2a03c7ff5e89ec70a4abdfb7\": not found" Dec 13 02:24:00.850238 kubelet[2243]: I1213 02:24:00.850199 2243 scope.go:117] "RemoveContainer" containerID="6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e" Dec 13 02:24:00.850546 env[1835]: time="2024-12-13T02:24:00.850475806Z" level=error msg="ContainerStatus for \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\": not found" Dec 13 02:24:00.850701 kubelet[2243]: E1213 02:24:00.850683 2243 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\": not found" containerID="6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e" Dec 13 02:24:00.850786 kubelet[2243]: I1213 02:24:00.850716 2243 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e"} err="failed to get container status \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"6e873376332f0334783cfd94f89d453573f0bea7274311e2ac150efa1ab77f8e\": not found" Dec 13 02:24:00.850786 kubelet[2243]: I1213 02:24:00.850731 2243 scope.go:117] "RemoveContainer" containerID="eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a" Dec 13 02:24:00.850967 env[1835]: time="2024-12-13T02:24:00.850908108Z" level=error msg="ContainerStatus for \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\": not found" Dec 13 02:24:00.851067 kubelet[2243]: E1213 02:24:00.851039 2243 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\": not found" containerID="eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a" Dec 13 02:24:00.851137 kubelet[2243]: I1213 02:24:00.851071 2243 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a"} err="failed to get container status \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb4ad5d98496eb6b9da7bf56b4578317166ef23d6e98444ca4e0e504b3854e3a\": not found" Dec 13 02:24:00.851137 kubelet[2243]: I1213 02:24:00.851086 2243 scope.go:117] "RemoveContainer" containerID="2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5" Dec 13 02:24:00.851307 env[1835]: time="2024-12-13T02:24:00.851253748Z" level=error msg="ContainerStatus for \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\": not found" Dec 13 02:24:00.851610 kubelet[2243]: E1213 02:24:00.851435 2243 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\": not found" containerID="2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5" Dec 13 02:24:00.851610 kubelet[2243]: I1213 02:24:00.851457 2243 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5"} err="failed to get container status \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"2795a9344a2c7b21de48dc2aacf65a949176109e32195246c6118931aa8f45f5\": not found" Dec 13 02:24:00.871373 kubelet[2243]: I1213 02:24:00.868783 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kfsh\" (UniqueName: \"kubernetes.io/projected/a03534de-984c-4cee-9dea-c0718f3c32f6-kube-api-access-6kfsh\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871373 kubelet[2243]: I1213 02:24:00.868867 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-cgroup\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871373 kubelet[2243]: I1213 02:24:00.868897 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-run\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: 
\"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871373 kubelet[2243]: I1213 02:24:00.868923 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-host-proc-sys-kernel\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871373 kubelet[2243]: I1213 02:24:00.868962 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-config-path\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871373 kubelet[2243]: I1213 02:24:00.868987 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-bpf-maps\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871833 kubelet[2243]: I1213 02:24:00.869013 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-xtables-lock\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871833 kubelet[2243]: I1213 02:24:00.869039 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-hostproc\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871833 kubelet[2243]: I1213 02:24:00.869064 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cni-path\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871833 kubelet[2243]: I1213 02:24:00.869090 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-etc-cni-netd\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871833 kubelet[2243]: I1213 02:24:00.869115 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-lib-modules\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.871833 kubelet[2243]: I1213 02:24:00.869151 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a03534de-984c-4cee-9dea-c0718f3c32f6-clustermesh-secrets\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.872086 kubelet[2243]: I1213 02:24:00.869183 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a03534de-984c-4cee-9dea-c0718f3c32f6-hubble-tls\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.872086 kubelet[2243]: I1213 02:24:00.869210 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-host-proc-sys-net\") pod \"a03534de-984c-4cee-9dea-c0718f3c32f6\" (UID: \"a03534de-984c-4cee-9dea-c0718f3c32f6\") " Dec 13 02:24:00.872086 kubelet[2243]: I1213 02:24:00.869259 2243 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872086 kubelet[2243]: I1213 02:24:00.869314 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872086 kubelet[2243]: I1213 02:24:00.869376 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872311 kubelet[2243]: I1213 02:24:00.869401 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872311 kubelet[2243]: I1213 02:24:00.869699 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cni-path" (OuterVolumeSpecName: "cni-path") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872311 kubelet[2243]: I1213 02:24:00.869753 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872311 kubelet[2243]: I1213 02:24:00.869779 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872311 kubelet[2243]: I1213 02:24:00.869802 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-hostproc" (OuterVolumeSpecName: "hostproc") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872565 kubelet[2243]: I1213 02:24:00.869840 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.872565 kubelet[2243]: I1213 02:24:00.869865 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:00.874215 kubelet[2243]: I1213 02:24:00.874177 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:24:00.877489 systemd[1]: var-lib-kubelet-pods-a03534de\x2d984c\x2d4cee\x2d9dea\x2dc0718f3c32f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6kfsh.mount: Deactivated successfully. Dec 13 02:24:00.882185 kubelet[2243]: I1213 02:24:00.875421 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a03534de-984c-4cee-9dea-c0718f3c32f6-kube-api-access-6kfsh" (OuterVolumeSpecName: "kube-api-access-6kfsh") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "kube-api-access-6kfsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:24:00.882557 kubelet[2243]: I1213 02:24:00.882518 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a03534de-984c-4cee-9dea-c0718f3c32f6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:24:00.885522 kubelet[2243]: I1213 02:24:00.885466 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a03534de-984c-4cee-9dea-c0718f3c32f6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a03534de-984c-4cee-9dea-c0718f3c32f6" (UID: "a03534de-984c-4cee-9dea-c0718f3c32f6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:24:00.970289 kubelet[2243]: I1213 02:24:00.969932 2243 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-cgroup\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970289 kubelet[2243]: I1213 02:24:00.970034 2243 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6kfsh\" (UniqueName: \"kubernetes.io/projected/a03534de-984c-4cee-9dea-c0718f3c32f6-kube-api-access-6kfsh\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970289 kubelet[2243]: I1213 02:24:00.970052 2243 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-bpf-maps\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970289 kubelet[2243]: I1213 02:24:00.970065 2243 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-xtables-lock\") on node 
\"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970289 kubelet[2243]: I1213 02:24:00.970109 2243 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-run\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970289 kubelet[2243]: I1213 02:24:00.970122 2243 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-host-proc-sys-kernel\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970289 kubelet[2243]: I1213 02:24:00.970135 2243 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a03534de-984c-4cee-9dea-c0718f3c32f6-cilium-config-path\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970289 kubelet[2243]: I1213 02:24:00.970150 2243 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-lib-modules\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970765 kubelet[2243]: I1213 02:24:00.970163 2243 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a03534de-984c-4cee-9dea-c0718f3c32f6-clustermesh-secrets\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970765 kubelet[2243]: I1213 02:24:00.970175 2243 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-hostproc\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970765 kubelet[2243]: I1213 02:24:00.970187 2243 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-cni-path\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970765 kubelet[2243]: I1213 
02:24:00.970201 2243 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-etc-cni-netd\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970765 kubelet[2243]: I1213 02:24:00.970216 2243 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a03534de-984c-4cee-9dea-c0718f3c32f6-hubble-tls\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:00.970765 kubelet[2243]: I1213 02:24:00.970232 2243 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a03534de-984c-4cee-9dea-c0718f3c32f6-host-proc-sys-net\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:01.374213 systemd[1]: var-lib-kubelet-pods-a03534de\x2d984c\x2d4cee\x2d9dea\x2dc0718f3c32f6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:24:01.374426 systemd[1]: var-lib-kubelet-pods-a03534de\x2d984c\x2d4cee\x2d9dea\x2dc0718f3c32f6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:24:01.427199 kubelet[2243]: E1213 02:24:01.427138 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:01.536954 kubelet[2243]: I1213 02:24:01.536910 2243 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a03534de-984c-4cee-9dea-c0718f3c32f6" path="/var/lib/kubelet/pods/a03534de-984c-4cee-9dea-c0718f3c32f6/volumes" Dec 13 02:24:02.428080 kubelet[2243]: E1213 02:24:02.428023 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:03.428806 kubelet[2243]: E1213 02:24:03.428749 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:04.119842 kubelet[2243]: I1213 02:24:04.119796 2243 topology_manager.go:215] "Topology Admit Handler" podUID="ce944c0a-96d7-4382-8060-ad1a06cd2d9e" podNamespace="kube-system" podName="cilium-cgd5w" Dec 13 02:24:04.120087 kubelet[2243]: E1213 02:24:04.119862 2243 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a03534de-984c-4cee-9dea-c0718f3c32f6" containerName="mount-cgroup" Dec 13 02:24:04.120087 kubelet[2243]: E1213 02:24:04.119878 2243 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a03534de-984c-4cee-9dea-c0718f3c32f6" containerName="mount-bpf-fs" Dec 13 02:24:04.120087 kubelet[2243]: E1213 02:24:04.119887 2243 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a03534de-984c-4cee-9dea-c0718f3c32f6" containerName="clean-cilium-state" Dec 13 02:24:04.120087 kubelet[2243]: E1213 02:24:04.119896 2243 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a03534de-984c-4cee-9dea-c0718f3c32f6" containerName="cilium-agent" Dec 13 02:24:04.120087 kubelet[2243]: E1213 02:24:04.119907 2243 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a03534de-984c-4cee-9dea-c0718f3c32f6" 
containerName="apply-sysctl-overwrites" Dec 13 02:24:04.120087 kubelet[2243]: I1213 02:24:04.119941 2243 memory_manager.go:354] "RemoveStaleState removing state" podUID="a03534de-984c-4cee-9dea-c0718f3c32f6" containerName="cilium-agent" Dec 13 02:24:04.127437 kubelet[2243]: I1213 02:24:04.127398 2243 topology_manager.go:215] "Topology Admit Handler" podUID="bfeb8dea-84e8-4a41-ab4c-0a8605b69777" podNamespace="kube-system" podName="cilium-operator-5cc964979-vdlgg" Dec 13 02:24:04.193099 kubelet[2243]: I1213 02:24:04.193059 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-hostproc\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193403 kubelet[2243]: I1213 02:24:04.193380 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-cgroup\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193511 kubelet[2243]: I1213 02:24:04.193426 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-host-proc-sys-net\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193511 kubelet[2243]: I1213 02:24:04.193458 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f4cb\" (UniqueName: \"kubernetes.io/projected/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-kube-api-access-6f4cb\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193511 
kubelet[2243]: I1213 02:24:04.193487 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-bpf-maps\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193658 kubelet[2243]: I1213 02:24:04.193516 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-etc-cni-netd\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193658 kubelet[2243]: I1213 02:24:04.193546 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-clustermesh-secrets\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193658 kubelet[2243]: I1213 02:24:04.193577 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-host-proc-sys-kernel\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193658 kubelet[2243]: I1213 02:24:04.193607 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-hubble-tls\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193658 kubelet[2243]: I1213 02:24:04.193638 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-run\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193873 kubelet[2243]: I1213 02:24:04.193672 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfeb8dea-84e8-4a41-ab4c-0a8605b69777-cilium-config-path\") pod \"cilium-operator-5cc964979-vdlgg\" (UID: \"bfeb8dea-84e8-4a41-ab4c-0a8605b69777\") " pod="kube-system/cilium-operator-5cc964979-vdlgg" Dec 13 02:24:04.193873 kubelet[2243]: I1213 02:24:04.193708 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cni-path\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193873 kubelet[2243]: I1213 02:24:04.193740 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-xtables-lock\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.193873 kubelet[2243]: I1213 02:24:04.193772 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc6xm\" (UniqueName: \"kubernetes.io/projected/bfeb8dea-84e8-4a41-ab4c-0a8605b69777-kube-api-access-pc6xm\") pod \"cilium-operator-5cc964979-vdlgg\" (UID: \"bfeb8dea-84e8-4a41-ab4c-0a8605b69777\") " pod="kube-system/cilium-operator-5cc964979-vdlgg" Dec 13 02:24:04.193873 kubelet[2243]: I1213 02:24:04.193803 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-config-path\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.194100 kubelet[2243]: I1213 02:24:04.193840 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-ipsec-secrets\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.194100 kubelet[2243]: I1213 02:24:04.193880 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-lib-modules\") pod \"cilium-cgd5w\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " pod="kube-system/cilium-cgd5w" Dec 13 02:24:04.346863 kubelet[2243]: E1213 02:24:04.346825 2243 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets clustermesh-secrets hubble-tls kube-api-access-6f4cb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-cgd5w" podUID="ce944c0a-96d7-4382-8060-ad1a06cd2d9e" Dec 13 02:24:04.430047 kubelet[2243]: E1213 02:24:04.429933 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:04.434779 env[1835]: time="2024-12-13T02:24:04.434729919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vdlgg,Uid:bfeb8dea-84e8-4a41-ab4c-0a8605b69777,Namespace:kube-system,Attempt:0,}" Dec 13 02:24:04.449428 env[1835]: time="2024-12-13T02:24:04.449330759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:24:04.449714 env[1835]: time="2024-12-13T02:24:04.449400603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:24:04.449714 env[1835]: time="2024-12-13T02:24:04.449415716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:24:04.449714 env[1835]: time="2024-12-13T02:24:04.449649512Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f3d9d2cdbb5efe1db49b4b698de182701c22d6b1e081c824ae4bd0a79801b01 pid=3859 runtime=io.containerd.runc.v2 Dec 13 02:24:04.515837 env[1835]: time="2024-12-13T02:24:04.515631465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-vdlgg,Uid:bfeb8dea-84e8-4a41-ab4c-0a8605b69777,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f3d9d2cdbb5efe1db49b4b698de182701c22d6b1e081c824ae4bd0a79801b01\"" Dec 13 02:24:04.517921 env[1835]: time="2024-12-13T02:24:04.517784069Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:24:04.901903 kubelet[2243]: I1213 02:24:04.901847 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cni-path\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.901903 kubelet[2243]: I1213 02:24:04.901906 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-host-proc-sys-kernel\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 
13 02:24:04.902155 kubelet[2243]: I1213 02:24:04.901933 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-xtables-lock\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902155 kubelet[2243]: I1213 02:24:04.901964 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-ipsec-secrets\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902155 kubelet[2243]: I1213 02:24:04.901990 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-cgroup\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902155 kubelet[2243]: I1213 02:24:04.902016 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-lib-modules\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902155 kubelet[2243]: I1213 02:24:04.902045 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-hostproc\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902155 kubelet[2243]: I1213 02:24:04.902071 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-hubble-tls\") pod 
\"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902465 kubelet[2243]: I1213 02:24:04.902096 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-run\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902465 kubelet[2243]: I1213 02:24:04.902122 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-host-proc-sys-net\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902465 kubelet[2243]: I1213 02:24:04.902150 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f4cb\" (UniqueName: \"kubernetes.io/projected/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-kube-api-access-6f4cb\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902465 kubelet[2243]: I1213 02:24:04.902181 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-clustermesh-secrets\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902465 kubelet[2243]: I1213 02:24:04.902212 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-config-path\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902465 kubelet[2243]: I1213 02:24:04.902240 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-bpf-maps\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902709 kubelet[2243]: I1213 02:24:04.902267 2243 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-etc-cni-netd\") pod \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\" (UID: \"ce944c0a-96d7-4382-8060-ad1a06cd2d9e\") " Dec 13 02:24:04.902709 kubelet[2243]: I1213 02:24:04.902369 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.902709 kubelet[2243]: I1213 02:24:04.902417 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cni-path" (OuterVolumeSpecName: "cni-path") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.902709 kubelet[2243]: I1213 02:24:04.902440 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.902709 kubelet[2243]: I1213 02:24:04.902466 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.905386 kubelet[2243]: I1213 02:24:04.902954 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.905386 kubelet[2243]: I1213 02:24:04.902999 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.905386 kubelet[2243]: I1213 02:24:04.903022 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.905386 kubelet[2243]: I1213 02:24:04.903046 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-hostproc" (OuterVolumeSpecName: "hostproc") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.907832 kubelet[2243]: I1213 02:24:04.907788 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:24:04.910224 kubelet[2243]: I1213 02:24:04.907860 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.910687 kubelet[2243]: I1213 02:24:04.910649 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:24:04.913459 kubelet[2243]: I1213 02:24:04.913413 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:24:04.914028 kubelet[2243]: I1213 02:24:04.913990 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-kube-api-access-6f4cb" (OuterVolumeSpecName: "kube-api-access-6f4cb") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "kube-api-access-6f4cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:24:04.914124 kubelet[2243]: I1213 02:24:04.914046 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:24:04.920128 kubelet[2243]: I1213 02:24:04.920081 2243 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ce944c0a-96d7-4382-8060-ad1a06cd2d9e" (UID: "ce944c0a-96d7-4382-8060-ad1a06cd2d9e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:24:05.002502 kubelet[2243]: I1213 02:24:05.002453 2243 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-bpf-maps\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002502 kubelet[2243]: I1213 02:24:05.002499 2243 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-etc-cni-netd\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002502 kubelet[2243]: I1213 02:24:05.002515 2243 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-clustermesh-secrets\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002775 kubelet[2243]: I1213 02:24:05.002529 2243 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-config-path\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002775 kubelet[2243]: I1213 02:24:05.002542 2243 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cni-path\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002775 kubelet[2243]: I1213 02:24:05.002555 2243 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-xtables-lock\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002775 kubelet[2243]: I1213 02:24:05.002567 2243 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-ipsec-secrets\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002775 
kubelet[2243]: I1213 02:24:05.002581 2243 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-host-proc-sys-kernel\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002775 kubelet[2243]: I1213 02:24:05.002594 2243 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-cgroup\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002775 kubelet[2243]: I1213 02:24:05.002605 2243 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-lib-modules\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002775 kubelet[2243]: I1213 02:24:05.002622 2243 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-hubble-tls\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002977 kubelet[2243]: I1213 02:24:05.002635 2243 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-hostproc\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002977 kubelet[2243]: I1213 02:24:05.002649 2243 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6f4cb\" (UniqueName: \"kubernetes.io/projected/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-kube-api-access-6f4cb\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002977 kubelet[2243]: I1213 02:24:05.002663 2243 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-cilium-run\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.002977 kubelet[2243]: I1213 02:24:05.002678 2243 reconciler_common.go:300] "Volume detached 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ce944c0a-96d7-4382-8060-ad1a06cd2d9e-host-proc-sys-net\") on node \"172.31.24.110\" DevicePath \"\"" Dec 13 02:24:05.309827 systemd[1]: var-lib-kubelet-pods-ce944c0a\x2d96d7\x2d4382\x2d8060\x2dad1a06cd2d9e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:24:05.310025 systemd[1]: var-lib-kubelet-pods-ce944c0a\x2d96d7\x2d4382\x2d8060\x2dad1a06cd2d9e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6f4cb.mount: Deactivated successfully. Dec 13 02:24:05.310166 systemd[1]: var-lib-kubelet-pods-ce944c0a\x2d96d7\x2d4382\x2d8060\x2dad1a06cd2d9e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:24:05.310286 systemd[1]: var-lib-kubelet-pods-ce944c0a\x2d96d7\x2d4382\x2d8060\x2dad1a06cd2d9e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:24:05.430237 kubelet[2243]: E1213 02:24:05.430182 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:05.478425 kubelet[2243]: E1213 02:24:05.478388 2243 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:24:05.885762 kubelet[2243]: I1213 02:24:05.885719 2243 topology_manager.go:215] "Topology Admit Handler" podUID="423fc84d-89e7-45d0-a715-de8a605dd8be" podNamespace="kube-system" podName="cilium-h9qvs" Dec 13 02:24:06.010559 kubelet[2243]: I1213 02:24:06.010510 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-lib-modules\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.010559 
kubelet[2243]: I1213 02:24:06.010565 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-cilium-run\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.010769 kubelet[2243]: I1213 02:24:06.010593 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/423fc84d-89e7-45d0-a715-de8a605dd8be-cilium-config-path\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.010769 kubelet[2243]: I1213 02:24:06.010619 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-bpf-maps\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.010769 kubelet[2243]: I1213 02:24:06.010666 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/423fc84d-89e7-45d0-a715-de8a605dd8be-clustermesh-secrets\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.010769 kubelet[2243]: I1213 02:24:06.010696 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-etc-cni-netd\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.010769 kubelet[2243]: I1213 02:24:06.010724 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-host-proc-sys-kernel\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.011018 kubelet[2243]: I1213 02:24:06.010774 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-cni-path\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.011018 kubelet[2243]: I1213 02:24:06.010803 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-cilium-cgroup\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.011018 kubelet[2243]: I1213 02:24:06.010836 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-hostproc\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.011018 kubelet[2243]: I1213 02:24:06.010866 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/423fc84d-89e7-45d0-a715-de8a605dd8be-cilium-ipsec-secrets\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.011018 kubelet[2243]: I1213 02:24:06.010895 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/423fc84d-89e7-45d0-a715-de8a605dd8be-hubble-tls\") pod 
\"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.011018 kubelet[2243]: I1213 02:24:06.010929 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-host-proc-sys-net\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.011409 kubelet[2243]: I1213 02:24:06.010967 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdzb6\" (UniqueName: \"kubernetes.io/projected/423fc84d-89e7-45d0-a715-de8a605dd8be-kube-api-access-hdzb6\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.011409 kubelet[2243]: I1213 02:24:06.010998 2243 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/423fc84d-89e7-45d0-a715-de8a605dd8be-xtables-lock\") pod \"cilium-h9qvs\" (UID: \"423fc84d-89e7-45d0-a715-de8a605dd8be\") " pod="kube-system/cilium-h9qvs" Dec 13 02:24:06.336280 kubelet[2243]: I1213 02:24:06.332193 2243 setters.go:568] "Node became not ready" node="172.31.24.110" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:24:06Z","lastTransitionTime":"2024-12-13T02:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:24:06.430931 kubelet[2243]: E1213 02:24:06.430884 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:06.489087 env[1835]: time="2024-12-13T02:24:06.489036747Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-h9qvs,Uid:423fc84d-89e7-45d0-a715-de8a605dd8be,Namespace:kube-system,Attempt:0,}" Dec 13 02:24:06.521075 env[1835]: time="2024-12-13T02:24:06.519999101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:24:06.521075 env[1835]: time="2024-12-13T02:24:06.520236937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:24:06.521075 env[1835]: time="2024-12-13T02:24:06.520256450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:24:06.521075 env[1835]: time="2024-12-13T02:24:06.520539767Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7 pid=3911 runtime=io.containerd.runc.v2 Dec 13 02:24:06.569016 systemd[1]: run-containerd-runc-k8s.io-8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7-runc.L9A8Ja.mount: Deactivated successfully. 
Dec 13 02:24:06.642577 env[1835]: time="2024-12-13T02:24:06.642283955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9qvs,Uid:423fc84d-89e7-45d0-a715-de8a605dd8be,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\"" Dec 13 02:24:06.650999 env[1835]: time="2024-12-13T02:24:06.650949493Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:24:06.671030 env[1835]: time="2024-12-13T02:24:06.669056488Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7bdb1ea62ce1b86eae08d76d37c983bd634ec30d676571f2518c9fd91c089d2\"" Dec 13 02:24:06.671030 env[1835]: time="2024-12-13T02:24:06.670985929Z" level=info msg="StartContainer for \"a7bdb1ea62ce1b86eae08d76d37c983bd634ec30d676571f2518c9fd91c089d2\"" Dec 13 02:24:06.766325 env[1835]: time="2024-12-13T02:24:06.766089608Z" level=info msg="StartContainer for \"a7bdb1ea62ce1b86eae08d76d37c983bd634ec30d676571f2518c9fd91c089d2\" returns successfully" Dec 13 02:24:06.848496 env[1835]: time="2024-12-13T02:24:06.848437266Z" level=info msg="shim disconnected" id=a7bdb1ea62ce1b86eae08d76d37c983bd634ec30d676571f2518c9fd91c089d2 Dec 13 02:24:06.848496 env[1835]: time="2024-12-13T02:24:06.848493234Z" level=warning msg="cleaning up after shim disconnected" id=a7bdb1ea62ce1b86eae08d76d37c983bd634ec30d676571f2518c9fd91c089d2 namespace=k8s.io Dec 13 02:24:06.848805 env[1835]: time="2024-12-13T02:24:06.848506177Z" level=info msg="cleaning up dead shim" Dec 13 02:24:06.868159 env[1835]: time="2024-12-13T02:24:06.868106353Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 
runtime=io.containerd.runc.v2\n" Dec 13 02:24:07.431457 kubelet[2243]: E1213 02:24:07.431407 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:07.537244 kubelet[2243]: I1213 02:24:07.537196 2243 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ce944c0a-96d7-4382-8060-ad1a06cd2d9e" path="/var/lib/kubelet/pods/ce944c0a-96d7-4382-8060-ad1a06cd2d9e/volumes" Dec 13 02:24:07.845618 env[1835]: time="2024-12-13T02:24:07.845571269Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:24:07.865533 env[1835]: time="2024-12-13T02:24:07.865485438Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76c349d00f1201327a6b22238e300f7d056366fe7c091f8ea9e99f42ce494bc0\"" Dec 13 02:24:07.866259 env[1835]: time="2024-12-13T02:24:07.866219800Z" level=info msg="StartContainer for \"76c349d00f1201327a6b22238e300f7d056366fe7c091f8ea9e99f42ce494bc0\"" Dec 13 02:24:07.969156 env[1835]: time="2024-12-13T02:24:07.969100268Z" level=info msg="StartContainer for \"76c349d00f1201327a6b22238e300f7d056366fe7c091f8ea9e99f42ce494bc0\" returns successfully" Dec 13 02:24:08.015082 env[1835]: time="2024-12-13T02:24:08.015024135Z" level=info msg="shim disconnected" id=76c349d00f1201327a6b22238e300f7d056366fe7c091f8ea9e99f42ce494bc0 Dec 13 02:24:08.015082 env[1835]: time="2024-12-13T02:24:08.015082875Z" level=warning msg="cleaning up after shim disconnected" id=76c349d00f1201327a6b22238e300f7d056366fe7c091f8ea9e99f42ce494bc0 namespace=k8s.io Dec 13 02:24:08.015413 env[1835]: time="2024-12-13T02:24:08.015095993Z" level=info msg="cleaning up dead shim" Dec 13 02:24:08.025177 env[1835]: 
time="2024-12-13T02:24:08.025012804Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4062 runtime=io.containerd.runc.v2\n" Dec 13 02:24:08.431781 kubelet[2243]: E1213 02:24:08.431730 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:08.500409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76c349d00f1201327a6b22238e300f7d056366fe7c091f8ea9e99f42ce494bc0-rootfs.mount: Deactivated successfully. Dec 13 02:24:08.845648 env[1835]: time="2024-12-13T02:24:08.845606705Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:24:08.885578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2842401631.mount: Deactivated successfully. Dec 13 02:24:08.899280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174045739.mount: Deactivated successfully. 
Dec 13 02:24:08.912122 env[1835]: time="2024-12-13T02:24:08.912045233Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c58bfa40ab2c62dec22e1b26c559a59039b96f0c0918d0dcb2fbc94dd3417af5\"" Dec 13 02:24:08.913338 env[1835]: time="2024-12-13T02:24:08.913293527Z" level=info msg="StartContainer for \"c58bfa40ab2c62dec22e1b26c559a59039b96f0c0918d0dcb2fbc94dd3417af5\"" Dec 13 02:24:09.032253 env[1835]: time="2024-12-13T02:24:09.030932550Z" level=info msg="StartContainer for \"c58bfa40ab2c62dec22e1b26c559a59039b96f0c0918d0dcb2fbc94dd3417af5\" returns successfully" Dec 13 02:24:09.132992 env[1835]: time="2024-12-13T02:24:09.132838038Z" level=info msg="shim disconnected" id=c58bfa40ab2c62dec22e1b26c559a59039b96f0c0918d0dcb2fbc94dd3417af5 Dec 13 02:24:09.132992 env[1835]: time="2024-12-13T02:24:09.132914675Z" level=warning msg="cleaning up after shim disconnected" id=c58bfa40ab2c62dec22e1b26c559a59039b96f0c0918d0dcb2fbc94dd3417af5 namespace=k8s.io Dec 13 02:24:09.132992 env[1835]: time="2024-12-13T02:24:09.132929297Z" level=info msg="cleaning up dead shim" Dec 13 02:24:09.155690 env[1835]: time="2024-12-13T02:24:09.155639589Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4123 runtime=io.containerd.runc.v2\n" Dec 13 02:24:09.433027 kubelet[2243]: E1213 02:24:09.432880 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:09.702832 env[1835]: time="2024-12-13T02:24:09.702706072Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:24:09.707407 env[1835]: 
time="2024-12-13T02:24:09.707363271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:24:09.710770 env[1835]: time="2024-12-13T02:24:09.710725230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:24:09.711740 env[1835]: time="2024-12-13T02:24:09.711693046Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:24:09.714672 env[1835]: time="2024-12-13T02:24:09.714635698Z" level=info msg="CreateContainer within sandbox \"5f3d9d2cdbb5efe1db49b4b698de182701c22d6b1e081c824ae4bd0a79801b01\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:24:09.732784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000078701.mount: Deactivated successfully. 
Dec 13 02:24:09.747171 env[1835]: time="2024-12-13T02:24:09.747125587Z" level=info msg="CreateContainer within sandbox \"5f3d9d2cdbb5efe1db49b4b698de182701c22d6b1e081c824ae4bd0a79801b01\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"08612d7e1c2ddaf34461dcde50338c65f023f6079d2e3c22b3b82036ff133278\"" Dec 13 02:24:09.748002 env[1835]: time="2024-12-13T02:24:09.747961183Z" level=info msg="StartContainer for \"08612d7e1c2ddaf34461dcde50338c65f023f6079d2e3c22b3b82036ff133278\"" Dec 13 02:24:09.824299 env[1835]: time="2024-12-13T02:24:09.824245619Z" level=info msg="StartContainer for \"08612d7e1c2ddaf34461dcde50338c65f023f6079d2e3c22b3b82036ff133278\" returns successfully" Dec 13 02:24:09.856381 env[1835]: time="2024-12-13T02:24:09.856324463Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:24:09.883003 env[1835]: time="2024-12-13T02:24:09.882948925Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bcf8fb32fd0d9336ac9a89aa8e8be96ad1a77a4d385344d55228c2f3c794bb1c\"" Dec 13 02:24:09.887759 env[1835]: time="2024-12-13T02:24:09.887584497Z" level=info msg="StartContainer for \"bcf8fb32fd0d9336ac9a89aa8e8be96ad1a77a4d385344d55228c2f3c794bb1c\"" Dec 13 02:24:09.928403 kubelet[2243]: I1213 02:24:09.928330 2243 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-vdlgg" podStartSLOduration=1.7332420160000002 podStartE2EDuration="6.928292043s" podCreationTimestamp="2024-12-13 02:24:03 +0000 UTC" firstStartedPulling="2024-12-13 02:24:04.517193598 +0000 UTC m=+69.851429873" lastFinishedPulling="2024-12-13 02:24:09.712243602 +0000 UTC m=+75.046479900" observedRunningTime="2024-12-13 02:24:09.866334089 
+0000 UTC m=+75.200570389" watchObservedRunningTime="2024-12-13 02:24:09.928292043 +0000 UTC m=+75.262528340" Dec 13 02:24:09.982880 env[1835]: time="2024-12-13T02:24:09.982779908Z" level=info msg="StartContainer for \"bcf8fb32fd0d9336ac9a89aa8e8be96ad1a77a4d385344d55228c2f3c794bb1c\" returns successfully" Dec 13 02:24:10.027026 env[1835]: time="2024-12-13T02:24:10.026975240Z" level=info msg="shim disconnected" id=bcf8fb32fd0d9336ac9a89aa8e8be96ad1a77a4d385344d55228c2f3c794bb1c Dec 13 02:24:10.027394 env[1835]: time="2024-12-13T02:24:10.027370164Z" level=warning msg="cleaning up after shim disconnected" id=bcf8fb32fd0d9336ac9a89aa8e8be96ad1a77a4d385344d55228c2f3c794bb1c namespace=k8s.io Dec 13 02:24:10.027498 env[1835]: time="2024-12-13T02:24:10.027483631Z" level=info msg="cleaning up dead shim" Dec 13 02:24:10.045929 env[1835]: time="2024-12-13T02:24:10.045882967Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4213 runtime=io.containerd.runc.v2\n" Dec 13 02:24:10.433317 kubelet[2243]: E1213 02:24:10.433272 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:10.481069 kubelet[2243]: E1213 02:24:10.480860 2243 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:24:10.870150 env[1835]: time="2024-12-13T02:24:10.870107472Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:24:10.904490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244840239.mount: Deactivated successfully. 
Dec 13 02:24:10.905919 env[1835]: time="2024-12-13T02:24:10.905867169Z" level=info msg="CreateContainer within sandbox \"8cba37b23884074d4339a63dcd3bd2d82f289d138acf118a1fdd5ffcde9cd1e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7e07abced8fb6187fb47fdee73aa4cd2b102564580d04fdf117ba94f18a72d1\"" Dec 13 02:24:10.911620 env[1835]: time="2024-12-13T02:24:10.911570719Z" level=info msg="StartContainer for \"d7e07abced8fb6187fb47fdee73aa4cd2b102564580d04fdf117ba94f18a72d1\"" Dec 13 02:24:11.016630 env[1835]: time="2024-12-13T02:24:11.016571624Z" level=info msg="StartContainer for \"d7e07abced8fb6187fb47fdee73aa4cd2b102564580d04fdf117ba94f18a72d1\" returns successfully" Dec 13 02:24:11.433898 kubelet[2243]: E1213 02:24:11.433856 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:11.789373 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 02:24:11.931880 kubelet[2243]: I1213 02:24:11.931845 2243 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h9qvs" podStartSLOduration=6.931777675 podStartE2EDuration="6.931777675s" podCreationTimestamp="2024-12-13 02:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:24:11.930892993 +0000 UTC m=+77.265129296" watchObservedRunningTime="2024-12-13 02:24:11.931777675 +0000 UTC m=+77.266013974" Dec 13 02:24:12.434101 kubelet[2243]: E1213 02:24:12.434047 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:13.076359 systemd[1]: run-containerd-runc-k8s.io-d7e07abced8fb6187fb47fdee73aa4cd2b102564580d04fdf117ba94f18a72d1-runc.zJptdO.mount: Deactivated successfully. 
Dec 13 02:24:13.435100 kubelet[2243]: E1213 02:24:13.434583 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:14.435741 kubelet[2243]: E1213 02:24:14.435701 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:15.337079 kubelet[2243]: E1213 02:24:15.337029 2243 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:15.437004 kubelet[2243]: E1213 02:24:15.436967 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:15.447156 systemd-networkd[1511]: lxc_health: Link UP Dec 13 02:24:15.461894 (udev-worker)[4769]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:24:15.478163 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:24:15.477555 systemd-networkd[1511]: lxc_health: Gained carrier Dec 13 02:24:15.696938 kubelet[2243]: E1213 02:24:15.696433 2243 upgradeaware.go:439] Error proxying data from backend to client: write tcp 172.31.24.110:10250->172.31.30.169:44952: write: connection reset by peer Dec 13 02:24:16.438563 kubelet[2243]: E1213 02:24:16.438518 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:17.006524 systemd-networkd[1511]: lxc_health: Gained IPv6LL Dec 13 02:24:17.438842 kubelet[2243]: E1213 02:24:17.438722 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:18.439251 kubelet[2243]: E1213 02:24:18.439206 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:19.439462 kubelet[2243]: E1213 02:24:19.439374 2243 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:20.440626 kubelet[2243]: E1213 02:24:20.440462 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:21.441026 kubelet[2243]: E1213 02:24:21.440983 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:22.442430 kubelet[2243]: E1213 02:24:22.442382 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:22.481821 systemd[1]: run-containerd-runc-k8s.io-d7e07abced8fb6187fb47fdee73aa4cd2b102564580d04fdf117ba94f18a72d1-runc.xt3yZp.mount: Deactivated successfully. Dec 13 02:24:23.443008 kubelet[2243]: E1213 02:24:23.442951 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:24.443807 kubelet[2243]: E1213 02:24:24.443752 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:25.444952 kubelet[2243]: E1213 02:24:25.444899 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:26.445316 kubelet[2243]: E1213 02:24:26.445263 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:27.446392 kubelet[2243]: E1213 02:24:27.446338 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:28.447635 kubelet[2243]: E1213 02:24:28.447583 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:29.448592 kubelet[2243]: E1213 02:24:29.448538 2243 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:30.448903 kubelet[2243]: E1213 02:24:30.448849 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:31.449278 kubelet[2243]: E1213 02:24:31.449227 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:32.449597 kubelet[2243]: E1213 02:24:32.449543 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:33.450701 kubelet[2243]: E1213 02:24:33.450646 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:34.451090 kubelet[2243]: E1213 02:24:34.451037 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:35.335686 kubelet[2243]: E1213 02:24:35.335635 2243 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:35.451413 kubelet[2243]: E1213 02:24:35.451362 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:36.384825 kubelet[2243]: E1213 02:24:36.384783 2243 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.385302 kubelet[2243]: E1213 02:24:36.385282 2243 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.385882 kubelet[2243]: E1213 02:24:36.385853 2243 controller.go:195] 
"Failed to update lease" err="Put \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.386257 kubelet[2243]: E1213 02:24:36.386227 2243 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.386701 kubelet[2243]: E1213 02:24:36.386672 2243 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.386701 kubelet[2243]: I1213 02:24:36.386699 2243 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 13 02:24:36.387089 kubelet[2243]: E1213 02:24:36.387061 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" interval="200ms" Dec 13 02:24:36.451904 kubelet[2243]: E1213 02:24:36.451862 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:36.588638 kubelet[2243]: E1213 02:24:36.588594 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" interval="400ms" Dec 13 02:24:36.708099 kubelet[2243]: E1213 02:24:36.707990 2243 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting 
node \"172.31.24.110\": Get \"https://172.31.30.169:6443/api/v1/nodes/172.31.24.110?resourceVersion=0&timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.709390 kubelet[2243]: E1213 02:24:36.709272 2243 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.24.110\": Get \"https://172.31.30.169:6443/api/v1/nodes/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.709807 kubelet[2243]: E1213 02:24:36.709785 2243 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.24.110\": Get \"https://172.31.30.169:6443/api/v1/nodes/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.710142 kubelet[2243]: E1213 02:24:36.710119 2243 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.24.110\": Get \"https://172.31.30.169:6443/api/v1/nodes/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.710766 kubelet[2243]: E1213 02:24:36.710742 2243 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.24.110\": Get \"https://172.31.30.169:6443/api/v1/nodes/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" Dec 13 02:24:36.710766 kubelet[2243]: E1213 02:24:36.710766 2243 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Dec 13 02:24:36.991254 kubelet[2243]: E1213 02:24:36.991148 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.24.110?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" interval="800ms" Dec 13 02:24:37.452299 kubelet[2243]: E1213 02:24:37.452244 2243 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:38.452955 kubelet[2243]: E1213 02:24:38.452901 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:38.997555 amazon-ssm-agent[1811]: 2024-12-13 02:24:38 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 02:24:39.453920 kubelet[2243]: E1213 02:24:39.453862 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:40.454188 kubelet[2243]: E1213 02:24:40.454143 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:41.454636 kubelet[2243]: E1213 02:24:41.454582 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:42.070495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08612d7e1c2ddaf34461dcde50338c65f023f6079d2e3c22b3b82036ff133278-rootfs.mount: Deactivated successfully. 
Dec 13 02:24:42.086028 env[1835]: time="2024-12-13T02:24:42.085828691Z" level=info msg="shim disconnected" id=08612d7e1c2ddaf34461dcde50338c65f023f6079d2e3c22b3b82036ff133278 Dec 13 02:24:42.086028 env[1835]: time="2024-12-13T02:24:42.086011164Z" level=warning msg="cleaning up after shim disconnected" id=08612d7e1c2ddaf34461dcde50338c65f023f6079d2e3c22b3b82036ff133278 namespace=k8s.io Dec 13 02:24:42.086028 env[1835]: time="2024-12-13T02:24:42.086029410Z" level=info msg="cleaning up dead shim" Dec 13 02:24:42.099922 env[1835]: time="2024-12-13T02:24:42.099828302Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4909 runtime=io.containerd.runc.v2\n" Dec 13 02:24:42.455202 kubelet[2243]: E1213 02:24:42.455067 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:42.957363 kubelet[2243]: I1213 02:24:42.957310 2243 scope.go:117] "RemoveContainer" containerID="08612d7e1c2ddaf34461dcde50338c65f023f6079d2e3c22b3b82036ff133278" Dec 13 02:24:42.985336 env[1835]: time="2024-12-13T02:24:42.985232604Z" level=info msg="CreateContainer within sandbox \"5f3d9d2cdbb5efe1db49b4b698de182701c22d6b1e081c824ae4bd0a79801b01\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Dec 13 02:24:43.055890 env[1835]: time="2024-12-13T02:24:43.055826667Z" level=info msg="CreateContainer within sandbox \"5f3d9d2cdbb5efe1db49b4b698de182701c22d6b1e081c824ae4bd0a79801b01\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"b1e084c06a1fabd30e228d6660191239d4fe7b255170a69be559f85b8ecbce34\"" Dec 13 02:24:43.056921 env[1835]: time="2024-12-13T02:24:43.056881216Z" level=info msg="StartContainer for \"b1e084c06a1fabd30e228d6660191239d4fe7b255170a69be559f85b8ecbce34\"" Dec 13 02:24:43.116565 systemd[1]: 
run-containerd-runc-k8s.io-b1e084c06a1fabd30e228d6660191239d4fe7b255170a69be559f85b8ecbce34-runc.qSm3Ba.mount: Deactivated successfully. Dec 13 02:24:43.179017 env[1835]: time="2024-12-13T02:24:43.178189293Z" level=info msg="StartContainer for \"b1e084c06a1fabd30e228d6660191239d4fe7b255170a69be559f85b8ecbce34\" returns successfully" Dec 13 02:24:43.456208 kubelet[2243]: E1213 02:24:43.456158 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:44.456390 kubelet[2243]: E1213 02:24:44.456339 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:45.456595 kubelet[2243]: E1213 02:24:45.456544 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:24:46.456924 kubelet[2243]: E1213 02:24:46.456873 2243 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"