Dec 13 14:28:44.058288 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:28:44.058324 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:28:44.058340 kernel: BIOS-provided physical RAM map:
Dec 13 14:28:44.058351 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:28:44.058361 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:28:44.058370 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:28:44.058386 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 14:28:44.058396 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 14:28:44.058823 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 14:28:44.058842 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:28:44.058852 kernel: NX (Execute Disable) protection: active
Dec 13 14:28:44.058900 kernel: SMBIOS 2.7 present.
Dec 13 14:28:44.058912 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 14:28:44.058926 kernel: Hypervisor detected: KVM
Dec 13 14:28:44.058948 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:28:44.058962 kernel: kvm-clock: cpu 0, msr 4919a001, primary cpu clock
Dec 13 14:28:44.058975 kernel: kvm-clock: using sched offset of 7718681204 cycles
Dec 13 14:28:44.058990 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:28:44.059003 kernel: tsc: Detected 2500.000 MHz processor
Dec 13 14:28:44.059017 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:28:44.059035 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:28:44.059049 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 14:28:44.059062 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:28:44.059073 kernel: Using GB pages for direct mapping
Dec 13 14:28:44.059084 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:28:44.059096 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 14:28:44.059107 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 14:28:44.059176 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:28:44.059188 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 14:28:44.059204 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 14:28:44.059217 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:28:44.059229 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:28:44.059242 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 14:28:44.059255 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:28:44.059268 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 14:28:44.059280 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 14:28:44.059293 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:28:44.059308 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 14:28:44.059321 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 14:28:44.059335 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 14:28:44.059353 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 14:28:44.059367 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 14:28:44.059382 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 14:28:44.059396 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 14:28:44.059413 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 14:28:44.059427 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 14:28:44.059441 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 14:28:44.059455 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:28:44.059469 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:28:44.059483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 14:28:44.059498 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 14:28:44.059513 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 14:28:44.059529 kernel: Zone ranges:
Dec 13 14:28:44.059544 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:28:44.059557 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 14:28:44.059625 kernel: Normal empty
Dec 13 14:28:44.059894 kernel: Movable zone start for each node
Dec 13 14:28:44.059910 kernel: Early memory node ranges
Dec 13 14:28:44.059924 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:28:44.059963 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 14:28:44.059977 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 14:28:44.059996 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:28:44.060010 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:28:44.060024 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 14:28:44.060036 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:28:44.060050 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:28:44.060064 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 14:28:44.060078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:28:44.060093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:28:44.060108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:28:44.060127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:28:44.060142 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:28:44.060157 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:28:44.060172 kernel: TSC deadline timer available
Dec 13 14:28:44.060187 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:28:44.060201 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 14:28:44.060216 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:28:44.060230 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:28:44.063693 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:28:44.063727 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:28:44.063741 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:28:44.063754 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:28:44.063767 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 14:28:44.063781 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:28:44.063795 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:28:44.063810 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 14:28:44.063824 kernel: Policy zone: DMA32
Dec 13 14:28:44.063841 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:28:44.063859 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:28:44.063870 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:28:44.063884 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:28:44.063897 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:28:44.063908 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved)
Dec 13 14:28:44.063920 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:28:44.063932 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:28:44.063945 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:28:44.063963 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:28:44.063976 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:28:44.063990 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:28:44.064004 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:28:44.064020 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:28:44.064034 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:28:44.064048 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:28:44.064062 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:28:44.064077 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:28:44.064094 kernel: random: crng init done
Dec 13 14:28:44.064108 kernel: Console: colour VGA+ 80x25
Dec 13 14:28:44.064122 kernel: printk: console [ttyS0] enabled
Dec 13 14:28:44.064137 kernel: ACPI: Core revision 20210730
Dec 13 14:28:44.064151 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 14:28:44.064165 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:28:44.064180 kernel: x2apic enabled
Dec 13 14:28:44.064195 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:28:44.064209 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240939f1bb2, max_idle_ns: 440795263295 ns
Dec 13 14:28:44.064221 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500000)
Dec 13 14:28:44.064238 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:28:44.064253 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:28:44.064269 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:28:44.064294 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:28:44.064312 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:28:44.064326 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:28:44.064342 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:28:44.064357 kernel: RETBleed: Vulnerable
Dec 13 14:28:44.064371 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:28:44.064385 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:28:44.064400 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:28:44.064415 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:28:44.064429 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:28:44.064447 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:28:44.064461 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:28:44.064476 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:28:44.064491 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:28:44.064506 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:28:44.064521 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:28:44.064539 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:28:44.064555 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 14:28:44.064570 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:28:44.064601 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:28:44.064617 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:28:44.064632 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 14:28:44.064646 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 14:28:44.064671 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 14:28:44.064796 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 14:28:44.064812 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 14:28:44.064828 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:28:44.064845 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:28:44.064860 kernel: LSM: Security Framework initializing
Dec 13 14:28:44.064874 kernel: SELinux: Initializing.
Dec 13 14:28:44.064889 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:28:44.064903 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:28:44.064918 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:28:44.064933 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:28:44.064948 kernel: signal: max sigframe size: 3632
Dec 13 14:28:44.064964 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:28:44.064979 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:28:44.064994 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:28:44.065012 kernel: x86: Booting SMP configuration:
Dec 13 14:28:44.065027 kernel: .... node #0, CPUs: #1
Dec 13 14:28:44.065042 kernel: kvm-clock: cpu 1, msr 4919a041, secondary cpu clock
Dec 13 14:28:44.065058 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 14:28:44.065073 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:28:44.065089 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:28:44.065104 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:28:44.065129 kernel: smpboot: Max logical packages: 1
Dec 13 14:28:44.065148 kernel: smpboot: Total of 2 processors activated (10000.00 BogoMIPS)
Dec 13 14:28:44.065162 kernel: devtmpfs: initialized
Dec 13 14:28:44.065177 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:28:44.065192 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:28:44.065207 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:28:44.065222 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:28:44.065237 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:28:44.065252 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:28:44.065268 kernel: audit: type=2000 audit(1734100122.929:1): state=initialized audit_enabled=0 res=1
Dec 13 14:28:44.065285 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:28:44.065300 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:28:44.065315 kernel: cpuidle: using governor menu
Dec 13 14:28:44.065330 kernel: ACPI: bus type PCI registered
Dec 13 14:28:44.065345 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:28:44.065358 kernel: dca service started, version 1.12.1
Dec 13 14:28:44.065370 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:28:44.065385 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:28:44.065400 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:28:44.065417 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:28:44.065432 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:28:44.065447 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:28:44.065462 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:28:44.065477 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:28:44.065492 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:28:44.065507 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:28:44.065522 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:28:44.065537 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:28:44.065555 kernel: ACPI: Interpreter enabled
Dec 13 14:28:44.065569 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:28:44.065598 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:28:44.065614 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:28:44.065629 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:28:44.065644 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:28:44.065847 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:28:44.065976 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:28:44.065999 kernel: acpiphp: Slot [3] registered
Dec 13 14:28:44.066015 kernel: acpiphp: Slot [4] registered
Dec 13 14:28:44.066030 kernel: acpiphp: Slot [5] registered
Dec 13 14:28:44.066045 kernel: acpiphp: Slot [6] registered
Dec 13 14:28:44.066060 kernel: acpiphp: Slot [7] registered
Dec 13 14:28:44.066075 kernel: acpiphp: Slot [8] registered
Dec 13 14:28:44.066090 kernel: acpiphp: Slot [9] registered
Dec 13 14:28:44.066105 kernel: acpiphp: Slot [10] registered
Dec 13 14:28:44.066119 kernel: acpiphp: Slot [11] registered
Dec 13 14:28:44.066138 kernel: acpiphp: Slot [12] registered
Dec 13 14:28:44.066152 kernel: acpiphp: Slot [13] registered
Dec 13 14:28:44.066168 kernel: acpiphp: Slot [14] registered
Dec 13 14:28:44.066182 kernel: acpiphp: Slot [15] registered
Dec 13 14:28:44.066197 kernel: acpiphp: Slot [16] registered
Dec 13 14:28:44.066213 kernel: acpiphp: Slot [17] registered
Dec 13 14:28:44.066227 kernel: acpiphp: Slot [18] registered
Dec 13 14:28:44.066243 kernel: acpiphp: Slot [19] registered
Dec 13 14:28:44.066258 kernel: acpiphp: Slot [20] registered
Dec 13 14:28:44.066275 kernel: acpiphp: Slot [21] registered
Dec 13 14:28:44.066290 kernel: acpiphp: Slot [22] registered
Dec 13 14:28:44.066305 kernel: acpiphp: Slot [23] registered
Dec 13 14:28:44.066319 kernel: acpiphp: Slot [24] registered
Dec 13 14:28:44.066334 kernel: acpiphp: Slot [25] registered
Dec 13 14:28:44.066349 kernel: acpiphp: Slot [26] registered
Dec 13 14:28:44.066364 kernel: acpiphp: Slot [27] registered
Dec 13 14:28:44.066379 kernel: acpiphp: Slot [28] registered
Dec 13 14:28:44.066394 kernel: acpiphp: Slot [29] registered
Dec 13 14:28:44.066409 kernel: acpiphp: Slot [30] registered
Dec 13 14:28:44.066426 kernel: acpiphp: Slot [31] registered
Dec 13 14:28:44.066441 kernel: PCI host bridge to bus 0000:00
Dec 13 14:28:44.066571 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:28:44.066700 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:28:44.066809 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:28:44.066922 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:28:44.067034 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:28:44.067183 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:28:44.067324 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:28:44.067469 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 14:28:44.067614 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:28:44.067741 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 14:28:44.067866 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 14:28:44.067991 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 14:28:44.068121 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 14:28:44.068248 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 14:28:44.068376 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 14:28:44.068502 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 14:28:44.068650 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 14:28:44.068778 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 14:28:44.068906 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 14:28:44.069038 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:28:44.069325 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:28:44.069485 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 14:28:44.069646 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:28:44.069789 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 14:28:44.069811 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:28:44.069834 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:28:44.069850 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:28:44.069866 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:28:44.069883 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:28:44.069905 kernel: iommu: Default domain type: Translated
Dec 13 14:28:44.069922 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:28:44.070070 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 14:28:44.070218 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:28:44.070512 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 14:28:44.070545 kernel: vgaarb: loaded
Dec 13 14:28:44.070561 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:28:44.070578 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:28:44.070607 kernel: PTP clock support registered
Dec 13 14:28:44.070622 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:28:44.070636 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:28:44.070652 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:28:44.070667 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 14:28:44.070682 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 14:28:44.070701 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 14:28:44.070716 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:28:44.070731 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:28:44.070746 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:28:44.070760 kernel: pnp: PnP ACPI init
Dec 13 14:28:44.070775 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:28:44.070789 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:28:44.070804 kernel: NET: Registered PF_INET protocol family
Dec 13 14:28:44.070822 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:28:44.070837 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:28:44.070853 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:28:44.070868 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:28:44.070883 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:28:44.070898 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:28:44.070913 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:28:44.070928 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:28:44.070943 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:28:44.070961 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:28:44.071108 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:28:44.071225 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:28:44.071341 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:28:44.071645 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:28:44.071905 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:28:44.072285 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:28:44.073274 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:28:44.073294 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:28:44.073309 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240939f1bb2, max_idle_ns: 440795263295 ns
Dec 13 14:28:44.073325 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:28:44.073337 kernel: Initialise system trusted keyrings
Dec 13 14:28:44.073350 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:28:44.073579 kernel: Key type asymmetric registered
Dec 13 14:28:44.073608 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:28:44.073622 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:28:44.073641 kernel: io scheduler mq-deadline registered
Dec 13 14:28:44.073657 kernel: io scheduler kyber registered
Dec 13 14:28:44.073673 kernel: io scheduler bfq registered
Dec 13 14:28:44.073688 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:28:44.073704 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:28:44.073720 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:28:44.073737 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:28:44.073752 kernel: i8042: Warning: Keylock active
Dec 13 14:28:44.073767 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:28:44.073780 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:28:44.074398 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:28:44.074932 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:28:44.075062 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:28:43 UTC (1734100123)
Dec 13 14:28:44.075180 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:28:44.075199 kernel: intel_pstate: CPU model not supported
Dec 13 14:28:44.075215 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:28:44.075231 kernel: Segment Routing with IPv6
Dec 13 14:28:44.075250 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:28:44.075265 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:28:44.075280 kernel: Key type dns_resolver registered
Dec 13 14:28:44.075295 kernel: IPI shorthand broadcast: enabled
Dec 13 14:28:44.075310 kernel: sched_clock: Marking stable (372210946, 244241399)->(692334601, -75882256)
Dec 13 14:28:44.075325 kernel: registered taskstats version 1
Dec 13 14:28:44.075340 kernel: Loading compiled-in X.509 certificates
Dec 13 14:28:44.075355 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:28:44.075369 kernel: Key type .fscrypt registered
Dec 13 14:28:44.075387 kernel: Key type fscrypt-provisioning registered
Dec 13 14:28:44.075402 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:28:44.075417 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:28:44.075431 kernel: ima: No architecture policies found
Dec 13 14:28:44.075446 kernel: clk: Disabling unused clocks
Dec 13 14:28:44.075461 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:28:44.075476 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:28:44.075491 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:28:44.075506 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:28:44.075524 kernel: Run /init as init process
Dec 13 14:28:44.075539 kernel: with arguments:
Dec 13 14:28:44.075554 kernel: /init
Dec 13 14:28:44.075568 kernel: with environment:
Dec 13 14:28:44.075583 kernel: HOME=/
Dec 13 14:28:44.076048 kernel: TERM=linux
Dec 13 14:28:44.076069 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:28:44.076088 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:28:44.076112 systemd[1]: Detected virtualization amazon.
Dec 13 14:28:44.076129 systemd[1]: Detected architecture x86-64.
Dec 13 14:28:44.076144 systemd[1]: Running in initrd.
Dec 13 14:28:44.076160 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:28:44.076192 systemd[1]: Hostname set to .
Dec 13 14:28:44.076211 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:28:44.076231 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:28:44.076247 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:28:44.076263 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:28:44.076278 systemd[1]: Reached target paths.target.
Dec 13 14:28:44.076293 systemd[1]: Reached target slices.target.
Dec 13 14:28:44.076309 systemd[1]: Reached target swap.target.
Dec 13 14:28:44.076325 systemd[1]: Reached target timers.target.
Dec 13 14:28:44.076650 systemd[1]: Listening on iscsid.socket.
Dec 13 14:28:44.076933 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:28:44.076953 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:28:44.076969 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:28:44.076985 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:28:44.077001 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:28:44.078157 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:28:44.078746 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:28:44.078772 systemd[1]: Reached target sockets.target.
Dec 13 14:28:44.078838 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:28:44.078855 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:28:44.078872 systemd[1]: Finished network-cleanup.service.
Dec 13 14:28:44.078886 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:28:44.078901 systemd[1]: Starting systemd-journald.service...
Dec 13 14:28:44.079315 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:28:44.079389 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:28:44.079414 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:28:44.079565 systemd-journald[185]: Journal started
Dec 13 14:28:44.080007 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2974623fb15aa66b50f6835009a67e) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:28:44.085619 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:28:44.097705 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 14:28:44.107257 systemd[1]: Started systemd-journald.service.
Dec 13 14:28:44.108035 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:28:44.111830 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:28:44.335149 kernel: audit: type=1130 audit(1734100124.101:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:44.335270 kernel: audit: type=1130 audit(1734100124.105:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:44.335294 kernel: audit: type=1130 audit(1734100124.107:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:44.335310 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:28:44.335332 kernel: Bridge firewalling registered
Dec 13 14:28:44.335352 kernel: SCSI subsystem initialized
Dec 13 14:28:44.335369 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:28:44.335386 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:28:44.335402 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:28:44.335419 kernel: audit: type=1130 audit(1734100124.333:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.179200 systemd-modules-load[186]: Inserted module 'br_netfilter' Dec 13 14:28:44.208545 systemd-resolved[187]: Positive Trust Anchors: Dec 13 14:28:44.208556 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:28:44.347100 kernel: audit: type=1130 audit(1734100124.340:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:44.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.208623 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:28:44.226161 systemd-resolved[187]: Defaulting to hostname 'linux'. Dec 13 14:28:44.232446 systemd-modules-load[186]: Inserted module 'dm_multipath' Dec 13 14:28:44.371330 kernel: audit: type=1130 audit(1734100124.353:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.333924 systemd[1]: Started systemd-resolved.service. Dec 13 14:28:44.340693 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:28:44.347385 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:28:44.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.372352 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Dec 13 14:28:44.392945 kernel: audit: type=1130 audit(1734100124.378:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.392510 systemd[1]: Reached target nss-lookup.target. Dec 13 14:28:44.397600 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:28:44.400973 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:28:44.414639 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:28:44.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.419607 kernel: audit: type=1130 audit(1734100124.413:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.423673 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:28:44.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.426837 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:28:44.434515 kernel: audit: type=1130 audit(1734100124.423:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:44.443627 dracut-cmdline[207]: dracut-dracut-053 Dec 13 14:28:44.446079 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:28:44.514615 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:28:44.538621 kernel: iscsi: registered transport (tcp) Dec 13 14:28:44.565766 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:28:44.565843 kernel: QLogic iSCSI HBA Driver Dec 13 14:28:44.600363 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:28:44.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:44.602497 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 14:28:44.659625 kernel: raid6: avx512x4 gen() 15738 MB/s Dec 13 14:28:44.676613 kernel: raid6: avx512x4 xor() 6925 MB/s Dec 13 14:28:44.693615 kernel: raid6: avx512x2 gen() 13249 MB/s Dec 13 14:28:44.711614 kernel: raid6: avx512x2 xor() 22869 MB/s Dec 13 14:28:44.728613 kernel: raid6: avx512x1 gen() 13715 MB/s Dec 13 14:28:44.746617 kernel: raid6: avx512x1 xor() 20824 MB/s Dec 13 14:28:44.764614 kernel: raid6: avx2x4 gen() 15222 MB/s Dec 13 14:28:44.781613 kernel: raid6: avx2x4 xor() 6211 MB/s Dec 13 14:28:44.798613 kernel: raid6: avx2x2 gen() 13432 MB/s Dec 13 14:28:44.815619 kernel: raid6: avx2x2 xor() 16312 MB/s Dec 13 14:28:44.832615 kernel: raid6: avx2x1 gen() 12795 MB/s Dec 13 14:28:44.849607 kernel: raid6: avx2x1 xor() 15451 MB/s Dec 13 14:28:44.866614 kernel: raid6: sse2x4 gen() 8701 MB/s Dec 13 14:28:44.883622 kernel: raid6: sse2x4 xor() 5606 MB/s Dec 13 14:28:44.900613 kernel: raid6: sse2x2 gen() 10129 MB/s Dec 13 14:28:44.917616 kernel: raid6: sse2x2 xor() 5775 MB/s Dec 13 14:28:44.934615 kernel: raid6: sse2x1 gen() 8830 MB/s Dec 13 14:28:44.952508 kernel: raid6: sse2x1 xor() 3832 MB/s Dec 13 14:28:44.952584 kernel: raid6: using algorithm avx512x4 gen() 15738 MB/s Dec 13 14:28:44.952612 kernel: raid6: .... xor() 6925 MB/s, rmw enabled Dec 13 14:28:44.953310 kernel: raid6: using avx512x2 recovery algorithm Dec 13 14:28:44.968614 kernel: xor: automatically using best checksumming function avx Dec 13 14:28:45.118621 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:28:45.128352 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:28:45.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:45.128000 audit: BPF prog-id=7 op=LOAD Dec 13 14:28:45.128000 audit: BPF prog-id=8 op=LOAD Dec 13 14:28:45.130312 systemd[1]: Starting systemd-udevd.service... 
Dec 13 14:28:45.146769 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 14:28:45.152699 systemd[1]: Started systemd-udevd.service. Dec 13 14:28:45.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:45.154800 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:28:45.192449 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Dec 13 14:28:45.248921 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:28:45.251965 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:28:45.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:45.351448 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:28:45.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:45.456621 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:28:45.474246 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:28:45.474420 kernel: AES CTR mode by8 optimization enabled Dec 13 14:28:45.481680 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 14:28:45.494283 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 14:28:45.494433 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 13 14:28:45.494562 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:0d:64:a8:3a:8d Dec 13 14:28:45.496666 (udev-worker)[443]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:28:45.638113 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 14:28:45.638634 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 14:28:45.638666 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 14:28:45.638844 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:28:45.638864 kernel: GPT:9289727 != 16777215 Dec 13 14:28:45.638881 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:28:45.639149 kernel: GPT:9289727 != 16777215 Dec 13 14:28:45.639172 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:28:45.639191 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:28:45.639208 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (435) Dec 13 14:28:45.651196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:28:45.701250 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:28:45.729381 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:28:45.739471 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:28:45.742050 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:28:45.780964 systemd[1]: Starting disk-uuid.service... Dec 13 14:28:45.797979 disk-uuid[593]: Primary Header is updated. Dec 13 14:28:45.797979 disk-uuid[593]: Secondary Entries is updated. Dec 13 14:28:45.797979 disk-uuid[593]: Secondary Header is updated. Dec 13 14:28:45.804646 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:28:45.817608 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:28:45.825629 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:28:46.824570 disk-uuid[594]: The operation has completed successfully. Dec 13 14:28:46.827473 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:28:46.972899 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:28:46.973163 systemd[1]: Finished disk-uuid.service. 
Dec 13 14:28:46.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:46.992844 systemd[1]: Starting verity-setup.service... Dec 13 14:28:47.040620 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:28:47.180393 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:28:47.189058 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:28:47.198104 systemd[1]: Finished verity-setup.service. Dec 13 14:28:47.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.392611 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:28:47.395730 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:28:47.399096 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:28:47.401799 systemd[1]: Starting ignition-setup.service... Dec 13 14:28:47.407850 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:28:47.444519 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:28:47.444614 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:28:47.444636 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:28:47.471613 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:28:47.496314 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:28:47.532665 systemd[1]: Finished ignition-setup.service. 
Dec 13 14:28:47.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.536070 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:28:47.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.554030 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:28:47.558000 audit: BPF prog-id=9 op=LOAD Dec 13 14:28:47.561350 systemd[1]: Starting systemd-networkd.service... Dec 13 14:28:47.604246 systemd-networkd[1106]: lo: Link UP Dec 13 14:28:47.604259 systemd-networkd[1106]: lo: Gained carrier Dec 13 14:28:47.606980 systemd-networkd[1106]: Enumeration completed Dec 13 14:28:47.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.607121 systemd[1]: Started systemd-networkd.service. Dec 13 14:28:47.614179 systemd[1]: Reached target network.target. Dec 13 14:28:47.619460 systemd[1]: Starting iscsiuio.service... Dec 13 14:28:47.619727 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:28:47.632538 systemd[1]: Started iscsiuio.service. Dec 13 14:28:47.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.634645 systemd[1]: Starting iscsid.service... 
Dec 13 14:28:47.637306 systemd-networkd[1106]: eth0: Link UP Dec 13 14:28:47.637313 systemd-networkd[1106]: eth0: Gained carrier Dec 13 14:28:47.643378 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:28:47.643378 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 14:28:47.643378 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:28:47.643378 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:28:47.643378 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:28:47.643378 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:28:47.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.657244 systemd[1]: Started iscsid.service. Dec 13 14:28:47.658727 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.20.184/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:28:47.662682 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:28:47.693359 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:28:47.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:47.699769 systemd[1]: Reached target remote-fs-pre.target. 
Dec 13 14:28:47.705313 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:28:47.713525 systemd[1]: Reached target remote-fs.target. Dec 13 14:28:47.718683 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:28:47.753274 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:28:47.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.144440 ignition[1102]: Ignition 2.14.0 Dec 13 14:28:48.144456 ignition[1102]: Stage: fetch-offline Dec 13 14:28:48.144627 ignition[1102]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:48.144692 ignition[1102]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:28:48.162476 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:28:48.164224 ignition[1102]: Ignition finished successfully Dec 13 14:28:48.166501 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:28:48.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.175469 systemd[1]: Starting ignition-fetch.service... 
Dec 13 14:28:48.187742 ignition[1130]: Ignition 2.14.0 Dec 13 14:28:48.187755 ignition[1130]: Stage: fetch Dec 13 14:28:48.188127 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:48.188166 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:28:48.199177 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:28:48.201204 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:28:48.232865 ignition[1130]: INFO : PUT result: OK Dec 13 14:28:48.237333 ignition[1130]: DEBUG : parsed url from cmdline: "" Dec 13 14:28:48.237333 ignition[1130]: INFO : no config URL provided Dec 13 14:28:48.237333 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:28:48.237333 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 14:28:48.242656 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:28:48.242656 ignition[1130]: INFO : PUT result: OK Dec 13 14:28:48.242656 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 14:28:48.246826 ignition[1130]: INFO : GET result: OK Dec 13 14:28:48.246826 ignition[1130]: DEBUG : parsing config with SHA512: a2c9c5fcdc5009071764077325028f8dc3f573b5a72a995263f5f279be49dc9fc2c907262c09be1677ef2e5dae8736bf447f145247e888191da643a1d20cd422 Dec 13 14:28:48.248645 unknown[1130]: fetched base config from "system" Dec 13 14:28:48.250117 ignition[1130]: fetch: fetch complete Dec 13 14:28:48.248655 unknown[1130]: fetched base config from "system" Dec 13 14:28:48.261415 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 13 14:28:48.261455 kernel: audit: type=1130 audit(1734100128.252:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.250125 ignition[1130]: fetch: fetch passed Dec 13 14:28:48.248662 unknown[1130]: fetched user config from "aws" Dec 13 14:28:48.250189 ignition[1130]: Ignition finished successfully Dec 13 14:28:48.253212 systemd[1]: Finished ignition-fetch.service. Dec 13 14:28:48.265579 systemd[1]: Starting ignition-kargs.service... Dec 13 14:28:48.277208 ignition[1136]: Ignition 2.14.0 Dec 13 14:28:48.277222 ignition[1136]: Stage: kargs Dec 13 14:28:48.277433 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:48.277465 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:28:48.288168 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:28:48.289655 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:28:48.292285 ignition[1136]: INFO : PUT result: OK Dec 13 14:28:48.297359 ignition[1136]: kargs: kargs passed Dec 13 14:28:48.297446 ignition[1136]: Ignition finished successfully Dec 13 14:28:48.299525 systemd[1]: Finished ignition-kargs.service. Dec 13 14:28:48.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.304396 systemd[1]: Starting ignition-disks.service... Dec 13 14:28:48.309668 kernel: audit: type=1130 audit(1734100128.300:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:28:48.316179 ignition[1142]: Ignition 2.14.0 Dec 13 14:28:48.316191 ignition[1142]: Stage: disks Dec 13 14:28:48.316557 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:28:48.316677 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:28:48.325277 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:28:48.328242 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:28:48.336668 ignition[1142]: INFO : PUT result: OK Dec 13 14:28:48.340158 ignition[1142]: disks: disks passed Dec 13 14:28:48.340230 ignition[1142]: Ignition finished successfully Dec 13 14:28:48.345687 systemd[1]: Finished ignition-disks.service. Dec 13 14:28:48.359288 kernel: audit: type=1130 audit(1734100128.345:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.346973 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:28:48.359356 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:28:48.361559 systemd[1]: Reached target local-fs.target. Dec 13 14:28:48.367821 systemd[1]: Reached target sysinit.target. Dec 13 14:28:48.374247 systemd[1]: Reached target basic.target. Dec 13 14:28:48.385683 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:28:48.482221 systemd-fsck[1150]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:28:48.487200 systemd[1]: Finished systemd-fsck-root.service. 
Dec 13 14:28:48.500641 kernel: audit: type=1130 audit(1734100128.493:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.500735 systemd[1]: Mounting sysroot.mount... Dec 13 14:28:48.524759 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:28:48.527839 systemd[1]: Mounted sysroot.mount. Dec 13 14:28:48.530490 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:28:48.538204 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:28:48.540398 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:28:48.540463 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:28:48.540568 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:28:48.546868 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:28:48.575432 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:28:48.578531 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 14:28:48.603083 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:28:48.605605 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1167) Dec 13 14:28:48.610024 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:28:48.610084 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:28:48.610096 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:28:48.615613 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:28:48.620010 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:28:48.626323 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:28:48.632682 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:28:48.639208 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:28:48.844929 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:28:48.858642 kernel: audit: type=1130 audit(1734100128.845:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:48.849182 systemd[1]: Starting ignition-mount.service... Dec 13 14:28:48.861897 systemd[1]: Starting sysroot-boot.service... Dec 13 14:28:48.880726 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:48.882407 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 14:28:48.909275 ignition[1233]: INFO : Ignition 2.14.0
Dec 13 14:28:48.910727 ignition[1233]: INFO : Stage: mount
Dec 13 14:28:48.913557 ignition[1233]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:48.915656 ignition[1233]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:28:48.922067 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:28:48.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:48.927629 kernel: audit: type=1130 audit(1734100128.921:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:48.932123 ignition[1233]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:28:48.933683 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:28:48.935208 ignition[1233]: INFO : PUT result: OK
Dec 13 14:28:48.939118 ignition[1233]: INFO : mount: mount passed
Dec 13 14:28:48.940831 ignition[1233]: INFO : Ignition finished successfully
Dec 13 14:28:48.941894 systemd[1]: Finished ignition-mount.service.
Dec 13 14:28:48.944033 systemd[1]: Starting ignition-files.service...
Dec 13 14:28:48.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:48.949704 kernel: audit: type=1130 audit(1734100128.941:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:48.960037 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:28:48.971624 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1243)
Dec 13 14:28:48.980664 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:28:48.980726 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:28:48.980738 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:28:48.988169 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:28:48.992304 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:28:49.009441 ignition[1262]: INFO : Ignition 2.14.0
Dec 13 14:28:49.009441 ignition[1262]: INFO : Stage: files
Dec 13 14:28:49.011845 ignition[1262]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:49.011845 ignition[1262]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:28:49.021600 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:28:49.024677 ignition[1262]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:28:49.026373 ignition[1262]: INFO : PUT result: OK
Dec 13 14:28:49.030169 ignition[1262]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:28:49.036292 ignition[1262]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:28:49.036292 ignition[1262]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:28:49.068068 ignition[1262]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:28:49.071963 ignition[1262]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:28:49.071963 ignition[1262]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:28:49.069944 unknown[1262]: wrote ssh authorized keys file for user: core
Dec 13 14:28:49.083688 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 14:28:49.083688 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 14:28:49.083688 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:28:49.083688 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:28:49.095256 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1262)
Dec 13 14:28:49.095342 ignition[1262]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1309784931"
Dec 13 14:28:49.095342 ignition[1262]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1309784931": device or resource busy
Dec 13 14:28:49.095342 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1309784931", trying btrfs: device or resource busy
Dec 13 14:28:49.095342 ignition[1262]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1309784931"
Dec 13 14:28:49.095342 ignition[1262]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1309784931"
Dec 13 14:28:49.095342 ignition[1262]: INFO : op(3): [started] unmounting "/mnt/oem1309784931"
Dec 13 14:28:49.095342 ignition[1262]: INFO : op(3): [finished] unmounting "/mnt/oem1309784931"
Dec 13 14:28:49.095342 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:28:49.095342 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:28:49.114360 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:28:49.114360 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:28:49.114360 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:28:49.114360 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:28:49.114360 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:28:49.114360 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:28:49.114360 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:28:49.114360 ignition[1262]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1028146315"
Dec 13 14:28:49.114360 ignition[1262]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1028146315": device or resource busy
Dec 13 14:28:49.114360 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1028146315", trying btrfs: device or resource busy
Dec 13 14:28:49.114360 ignition[1262]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1028146315"
Dec 13 14:28:49.114360 ignition[1262]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1028146315"
Dec 13 14:28:49.114360 ignition[1262]: INFO : op(6): [started] unmounting "/mnt/oem1028146315"
Dec 13 14:28:49.114360 ignition[1262]: INFO : op(6): [finished] unmounting "/mnt/oem1028146315"
Dec 13 14:28:49.114360 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:28:49.114360 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:28:49.114360 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1016365331"
Dec 13 14:28:49.154680 ignition[1262]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1016365331": device or resource busy
Dec 13 14:28:49.154680 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1016365331", trying btrfs: device or resource busy
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1016365331"
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1016365331"
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(9): [started] unmounting "/mnt/oem1016365331"
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(9): [finished] unmounting "/mnt/oem1016365331"
Dec 13 14:28:49.154680 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:28:49.154680 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:28:49.154680 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2914688334"
Dec 13 14:28:49.154680 ignition[1262]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2914688334": device or resource busy
Dec 13 14:28:49.154680 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2914688334", trying btrfs: device or resource busy
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2914688334"
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2914688334"
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(c): [started] unmounting "/mnt/oem2914688334"
Dec 13 14:28:49.154680 ignition[1262]: INFO : op(c): [finished] unmounting "/mnt/oem2914688334"
Dec 13 14:28:49.154680 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:28:49.154680 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:28:49.154680 ignition[1262]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 14:28:49.221822 systemd-networkd[1106]: eth0: Gained IPv6LL
Dec 13 14:28:49.683153 ignition[1262]: INFO : GET result: OK
Dec 13 14:28:50.125416 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:28:50.125416 ignition[1262]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:28:50.125416 ignition[1262]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:28:50.125416 ignition[1262]: INFO : files: op(d): [started] processing unit "amazon-ssm-agent.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(d): op(e): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(d): op(e): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(d): [finished] processing unit "amazon-ssm-agent.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(f): [started] processing unit "nvidia.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(f): [finished] processing unit "nvidia.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(10): [started] processing unit "containerd.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(10): op(11): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(10): [finished] processing unit "containerd.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(13): [started] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(13): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(14): [started] setting preset to enabled for "nvidia.service"
Dec 13 14:28:50.132857 ignition[1262]: INFO : files: op(14): [finished] setting preset to enabled for "nvidia.service"
Dec 13 14:28:50.181317 kernel: audit: type=1130 audit(1734100130.140:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.181353 kernel: audit: type=1130 audit(1734100130.164:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.181373 kernel: audit: type=1131 audit(1734100130.168:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.140993 systemd[1]: Finished ignition-files.service.
Dec 13 14:28:50.183169 ignition[1262]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:28:50.183169 ignition[1262]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:28:50.183169 ignition[1262]: INFO : files: files passed
Dec 13 14:28:50.183169 ignition[1262]: INFO : Ignition finished successfully
Dec 13 14:28:50.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.152079 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:28:50.155758 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:28:50.195670 initrd-setup-root-after-ignition[1287]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:28:50.157098 systemd[1]: Starting ignition-quench.service...
Dec 13 14:28:50.164413 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:28:50.164543 systemd[1]: Finished ignition-quench.service.
Dec 13 14:28:50.183477 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:28:50.186569 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:28:50.190620 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:28:50.216336 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:28:50.216428 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:28:50.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.219182 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:28:50.221208 systemd[1]: Reached target initrd.target.
Dec 13 14:28:50.224573 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:28:50.229019 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:28:50.242233 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:28:50.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.245353 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:28:50.272315 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:28:50.274359 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:28:50.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.276945 systemd[1]: Stopped target timers.target.
Dec 13 14:28:50.278805 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:28:50.279655 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:28:50.284249 systemd[1]: Stopped target initrd.target.
Dec 13 14:28:50.288310 systemd[1]: Stopped target basic.target.
Dec 13 14:28:50.290755 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:28:50.296400 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:28:50.299469 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:28:50.302276 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:28:50.306354 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:28:50.308770 systemd[1]: Stopped target sysinit.target.
Dec 13 14:28:50.313046 systemd[1]: Stopped target local-fs.target.
Dec 13 14:28:50.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.316049 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:28:50.318700 systemd[1]: Stopped target swap.target.
Dec 13 14:28:50.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.327952 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:28:50.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.328405 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:28:50.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.331150 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:28:50.334176 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:28:50.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.370739 iscsid[1111]: iscsid shutting down.
Dec 13 14:28:50.334378 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:28:50.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.380827 ignition[1300]: INFO : Ignition 2.14.0
Dec 13 14:28:50.380827 ignition[1300]: INFO : Stage: umount
Dec 13 14:28:50.380827 ignition[1300]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:28:50.380827 ignition[1300]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:28:50.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.336651 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:28:50.395828 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:28:50.395828 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:28:50.395828 ignition[1300]: INFO : PUT result: OK
Dec 13 14:28:50.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.336849 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:28:50.338658 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:28:50.338829 systemd[1]: Stopped ignition-files.service.
Dec 13 14:28:50.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.411090 ignition[1300]: INFO : umount: umount passed
Dec 13 14:28:50.411090 ignition[1300]: INFO : Ignition finished successfully
Dec 13 14:28:50.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.342220 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:28:50.350736 systemd[1]: Stopping iscsid.service...
Dec 13 14:28:50.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.351456 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:28:50.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.351857 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:28:50.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.354716 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:28:50.365561 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:28:50.366151 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:28:50.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.374973 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:28:50.375328 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:28:50.380730 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:28:50.380871 systemd[1]: Stopped iscsid.service.
Dec 13 14:28:50.388052 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:28:50.388462 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:28:50.398252 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:28:50.404822 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:28:50.407936 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:28:50.408158 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:28:50.409519 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:28:50.410943 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:28:50.414565 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:28:50.414656 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:28:50.418070 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:28:50.418149 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:28:50.420834 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:28:50.420899 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:28:50.423835 systemd[1]: Stopped target network.target.
Dec 13 14:28:50.427924 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:28:50.428403 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:28:50.430769 systemd[1]: Stopped target paths.target.
Dec 13 14:28:50.434044 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:28:50.437912 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:28:50.440423 systemd[1]: Stopped target slices.target.
Dec 13 14:28:50.444524 systemd[1]: Stopped target sockets.target.
Dec 13 14:28:50.446645 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:28:50.446712 systemd[1]: Closed iscsid.socket.
Dec 13 14:28:50.473912 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:28:50.473964 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:28:50.480833 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:28:50.481792 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:28:50.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.485254 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:28:50.487651 systemd-networkd[1106]: eth0: DHCPv6 lease lost
Dec 13 14:28:50.487740 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:28:50.489358 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:28:50.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.496000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:28:50.490879 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:28:50.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.495332 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:28:50.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.495403 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:28:50.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.499796 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:28:50.500784 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:28:50.500853 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:28:50.503684 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:28:50.504188 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:28:50.508104 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:28:50.508230 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:28:50.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.510850 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:28:50.525312 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:28:50.526168 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:28:50.526309 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:28:50.538150 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:28:50.538510 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:28:50.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.545000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:28:50.546788 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:28:50.547925 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:28:50.550172 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:28:50.551378 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:28:50.554092 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:28:50.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.554144 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:28:50.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.555390 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:28:50.555428 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:28:50.560280 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:28:50.560331 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:28:50.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.568159 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:28:50.570387 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:28:50.570448 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:28:50.572404 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:28:50.572507 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:28:50.579527 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:28:50.579637 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:28:50.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.744340 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:28:50.744470 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:28:50.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.750531 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:28:50.753739 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:28:50.753833 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:28:50.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:50.760760 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:28:50.782993 systemd[1]: Switching root.
Dec 13 14:28:50.787000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:28:50.787000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:28:50.787000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:28:50.788000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:28:50.788000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:28:50.806652 systemd-journald[185]: Journal stopped
Dec 13 14:28:57.458078 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:28:57.458162 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:28:57.458186 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:28:57.458203 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:28:57.458218 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:28:57.458234 kernel: SELinux: policy capability open_perms=1
Dec 13 14:28:57.458250 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:28:57.458271 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:28:57.458291 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:28:57.458309 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:28:57.458324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:28:57.458341 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:28:57.458358 systemd[1]: Successfully loaded SELinux policy in 98.830ms.
Dec 13 14:28:57.458390 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.808ms.
Dec 13 14:28:57.458410 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:28:57.458437 systemd[1]: Detected virtualization amazon.
Dec 13 14:28:57.458455 systemd[1]: Detected architecture x86-64.
Dec 13 14:28:57.458474 systemd[1]: Detected first boot.
Dec 13 14:28:57.458489 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:28:57.458513 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:28:57.458531 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:28:57.458550 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:28:57.458569 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:28:57.458610 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:28:57.458633 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:28:57.458650 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:28:57.458668 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:28:57.458702 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:28:57.458721 systemd[1]: Created slice system-getty.slice.
Dec 13 14:28:57.458738 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:28:57.458756 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:28:57.458775 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:28:57.458799 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:28:57.458819 systemd[1]: Created slice user.slice.
Dec 13 14:28:57.458837 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:28:57.458857 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:28:57.458877 systemd[1]: Set up automount boot.automount.
Dec 13 14:28:57.458897 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:28:57.458916 systemd[1]: Reached target integritysetup.target.
Dec 13 14:28:57.458937 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:28:57.458961 systemd[1]: Reached target remote-fs.target.
Dec 13 14:28:57.458981 systemd[1]: Reached target slices.target.
Dec 13 14:28:57.458998 systemd[1]: Reached target swap.target.
Dec 13 14:28:57.459016 systemd[1]: Reached target torcx.target.
Dec 13 14:28:57.459034 systemd[1]: Reached target veritysetup.target.
Dec 13 14:28:57.459051 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:28:57.459069 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:28:57.459086 kernel: kauditd_printk_skb: 55 callbacks suppressed
Dec 13 14:28:57.459105 kernel: audit: type=1400 audit(1734100137.166:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:28:57.459126 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:28:57.459144 kernel: audit: type=1335 audit(1734100137.166:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:28:57.459161 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:28:57.459178 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:28:57.459196 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:28:57.459214 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:28:57.459231 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:28:57.459249 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:28:57.459269 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:28:57.459289 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:28:57.459306 systemd[1]: Mounting media.mount...
Dec 13 14:28:57.459324 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:28:57.459343 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:28:57.459361 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:28:57.459378 systemd[1]: Mounting tmp.mount...
Dec 13 14:28:57.459395 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:28:57.459413 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:28:57.459434 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:28:57.459451 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:28:57.459469 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:28:57.459487 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:28:57.459505 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:28:57.459522 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:28:57.459539 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:28:57.459557 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:28:57.459580 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 14:28:57.459618 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 14:28:57.459636 systemd[1]: Starting systemd-journald.service...
Dec 13 14:28:57.459654 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:28:57.459672 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:28:57.459688 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:28:57.459706 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:28:57.459723 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:28:57.459741 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:28:57.459758 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:28:57.459779 systemd[1]: Mounted media.mount.
Dec 13 14:28:57.459798 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:28:57.459816 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:28:57.459833 systemd[1]: Mounted tmp.mount.
Dec 13 14:28:57.459852 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:28:57.459869 kernel: loop: module loaded
Dec 13 14:28:57.459887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:28:57.459904 kernel: audit: type=1130 audit(1734100137.397:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.459925 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:28:57.462679 kernel: audit: type=1130 audit(1734100137.405:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.462721 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:28:57.462740 kernel: audit: type=1131 audit(1734100137.405:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.462761 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:28:57.462779 kernel: audit: type=1130 audit(1734100137.419:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.462795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:28:57.462813 kernel: audit: type=1131 audit(1734100137.419:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.462831 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:28:57.462852 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:28:57.462870 kernel: audit: type=1130 audit(1734100137.429:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.462888 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:28:57.462909 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:28:57.462927 kernel: audit: type=1131 audit(1734100137.429:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.462945 kernel: audit: type=1130 audit(1734100137.436:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.462962 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:28:57.462980 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:28:57.462997 systemd[1]: Reached target network-pre.target.
Dec 13 14:28:57.463025 systemd-journald[1445]: Journal started
Dec 13 14:28:57.463100 systemd-journald[1445]: Runtime Journal (/run/log/journal/ec2974623fb15aa66b50f6835009a67e) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:28:57.166000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:28:57.166000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:28:57.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.454000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:28:57.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.454000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc736bc370 a2=4000 a3=7ffc736bc40c items=0 ppid=1 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:28:57.454000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:28:57.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.467667 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:28:57.476616 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:28:57.476699 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:28:57.497392 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:28:57.497526 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:28:57.497555 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:28:57.497577 systemd[1]: Started systemd-journald.service.
Dec 13 14:28:57.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.497097 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:28:57.497352 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:28:57.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.502735 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:28:57.506002 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:28:57.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.520257 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:28:57.521767 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:28:57.522852 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:28:57.539627 kernel: fuse: init (API version 7.34)
Dec 13 14:28:57.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.540837 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:28:57.541099 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:28:57.544375 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:28:57.551419 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:28:57.559722 systemd-journald[1445]: Time spent on flushing to /var/log/journal/ec2974623fb15aa66b50f6835009a67e is 91.311ms for 1115 entries.
Dec 13 14:28:57.559722 systemd-journald[1445]: System Journal (/var/log/journal/ec2974623fb15aa66b50f6835009a67e) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:28:57.667728 systemd-journald[1445]: Received client request to flush runtime journal.
Dec 13 14:28:57.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.574771 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:28:57.669950 udevadm[1492]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:28:57.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.617743 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:28:57.620747 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:28:57.653806 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:28:57.656783 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:28:57.669069 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:28:57.777698 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:28:57.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:57.781791 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:28:57.896795 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:28:57.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:58.591513 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:28:58.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:58.596319 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:28:58.627527 systemd-udevd[1504]: Using default interface naming scheme 'v252'.
Dec 13 14:28:58.670490 systemd[1]: Started systemd-udevd.service.
Dec 13 14:28:58.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:58.674008 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:28:58.687173 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:28:58.756866 systemd[1]: Found device dev-ttyS0.device.
Dec 13 14:28:58.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:28:58.785677 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:28:58.812048 (udev-worker)[1516]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:28:58.860614 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:28:58.868955 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:28:58.869078 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Dec 13 14:28:58.870234 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:28:58.916000 audit[1507]: AVC avc: denied { confidentiality } for pid=1507 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:28:58.916000 audit[1507]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5602c1794270 a1=337fc a2=7f79468a7bc5 a3=5 items=110 ppid=1504 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:28:58.916000 audit: CWD cwd="/"
Dec 13 14:28:58.916000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=1 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=2 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=3 name=(null) inode=14597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=4 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=5 name=(null) inode=14598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=6 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=7 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=8 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=9 name=(null) inode=14600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=10 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=11 name=(null) inode=14601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=12 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=13 name=(null) inode=14602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=14 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=15 name=(null) inode=14603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=16 name=(null) inode=14599 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=17 name=(null) inode=14604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=18 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=19 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=20 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=21 name=(null) inode=14606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=22 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=23 name=(null) inode=14607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=24 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=25 name=(null) inode=14608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=26 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=27 name=(null) inode=14609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=28 name=(null) inode=14605 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=29 name=(null) inode=14610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=30 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=31 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=32 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=33 name=(null) inode=14612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=34 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=35 name=(null) inode=14613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=36 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=37 name=(null) inode=14614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=38 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=39 name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=40 name=(null) inode=14611 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=41 name=(null) inode=14616 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=42 name=(null) inode=14596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=43 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=44 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=45 name=(null) inode=14618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=46 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=47 name=(null) inode=14619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=48 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=49 name=(null) inode=14620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=50 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=51 name=(null) inode=14621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=52 name=(null) inode=14617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=53 name=(null) inode=14622 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=55 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=56 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=57 name=(null) inode=14624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=58 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=59 name=(null) inode=14625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=60 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=61 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=62 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=63 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=64 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=65 name=(null) inode=14628 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=66 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:28:58.916000 audit: PATH item=67 name=(null) inode=14629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13
14:28:58.916000 audit: PATH item=68 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=69 name=(null) inode=14630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=70 name=(null) inode=14626 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=71 name=(null) inode=14631 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=72 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=73 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=74 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=75 name=(null) inode=14633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=76 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=77 
name=(null) inode=14634 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=78 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=79 name=(null) inode=14635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=80 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=81 name=(null) inode=14636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=82 name=(null) inode=14632 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=83 name=(null) inode=14637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=84 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=85 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=86 name=(null) inode=14638 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=87 name=(null) inode=14639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=88 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=89 name=(null) inode=14640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=90 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=91 name=(null) inode=14641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=92 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=93 name=(null) inode=14642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=94 name=(null) inode=14638 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=95 name=(null) inode=14643 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=96 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=97 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=98 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:58.963507 systemd-networkd[1510]: lo: Link UP Dec 13 14:28:58.963513 systemd-networkd[1510]: lo: Gained carrier Dec 13 14:28:58.964136 systemd-networkd[1510]: Enumeration completed Dec 13 14:28:58.964299 systemd[1]: Started systemd-networkd.service. Dec 13 14:28:58.964991 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:28:58.967144 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 14:28:58.916000 audit: PATH item=99 name=(null) inode=14645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=100 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=101 name=(null) inode=14646 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=102 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=103 name=(null) inode=14647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=104 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=105 name=(null) inode=14648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=106 name=(null) inode=14644 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=107 name=(null) inode=14649 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: 
PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PATH item=109 name=(null) inode=14650 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:28:58.916000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:28:58.980327 systemd-networkd[1510]: eth0: Link UP Dec 13 14:28:58.980544 systemd-networkd[1510]: eth0: Gained carrier Dec 13 14:28:58.980633 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:28:58.994667 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 14:28:58.995796 systemd-networkd[1510]: eth0: DHCPv4 address 172.31.20.184/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:28:59.044624 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 14:28:59.046612 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1508) Dec 13 14:28:59.071613 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:28:59.208564 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:28:59.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.275052 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:28:59.279271 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:28:59.367030 lvm[1619]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:28:59.414554 systemd[1]: Finished lvm2-activation-early.service. 
Dec 13 14:28:59.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.415951 systemd[1]: Reached target cryptsetup.target. Dec 13 14:28:59.425284 systemd[1]: Starting lvm2-activation.service... Dec 13 14:28:59.434759 lvm[1621]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:28:59.462337 systemd[1]: Finished lvm2-activation.service. Dec 13 14:28:59.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.463500 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:28:59.464744 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:28:59.464842 systemd[1]: Reached target local-fs.target. Dec 13 14:28:59.465912 systemd[1]: Reached target machines.target. Dec 13 14:28:59.469486 systemd[1]: Starting ldconfig.service... Dec 13 14:28:59.471691 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:59.471772 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:59.476964 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:28:59.480675 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:28:59.483640 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:28:59.488696 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:28:59.506085 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1624 (bootctl) Dec 13 14:28:59.508288 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:28:59.523140 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:28:59.530212 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:28:59.530793 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:28:59.570614 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:28:59.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.590821 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:28:59.751607 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:28:59.755659 systemd-fsck[1637]: fsck.fat 4.2 (2021-01-31) Dec 13 14:28:59.755659 systemd-fsck[1637]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 14:28:59.759957 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:28:59.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.763213 systemd[1]: Mounting boot.mount... Dec 13 14:28:59.785104 systemd[1]: Mounted boot.mount. Dec 13 14:28:59.787080 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:28:59.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.813407 systemd[1]: Finished systemd-boot-update.service. 
Dec 13 14:28:59.820306 (sd-sysext)[1645]: Using extensions 'kubernetes'. Dec 13 14:28:59.821099 (sd-sysext)[1645]: Merged extensions into '/usr'. Dec 13 14:28:59.850441 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:59.852988 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:28:59.854621 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:28:59.856738 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:28:59.859547 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:28:59.862639 systemd[1]: Starting modprobe@loop.service... Dec 13 14:28:59.863896 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:28:59.864094 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:28:59.864293 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:28:59.865938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:28:59.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.866187 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:28:59.891249 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:28:59.894556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 14:28:59.895095 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:28:59.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.901049 systemd[1]: Finished systemd-sysext.service. Dec 13 14:28:59.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.902899 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:28:59.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:28:59.904364 systemd[1]: Finished modprobe@loop.service. Dec 13 14:28:59.914684 systemd[1]: Starting ensure-sysext.service... Dec 13 14:28:59.915687 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:28:59.915896 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:28:59.917993 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:28:59.981679 systemd[1]: Reloading. 
Dec 13 14:28:59.981850 systemd-tmpfiles[1672]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:29:00.000301 systemd-tmpfiles[1672]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:29:00.053557 systemd-tmpfiles[1672]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:29:00.316192 /usr/lib/systemd/system-generators/torcx-generator[1692]: time="2024-12-13T14:29:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:29:00.338660 /usr/lib/systemd/system-generators/torcx-generator[1692]: time="2024-12-13T14:29:00Z" level=info msg="torcx already run" Dec 13 14:29:00.487970 systemd-networkd[1510]: eth0: Gained IPv6LL Dec 13 14:29:00.547012 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:29:00.547034 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:29:00.575813 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:29:00.670101 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:29:00.684872 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:29:00.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:00.686653 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:29:00.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.689928 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:29:00.698736 systemd[1]: Starting audit-rules.service... Dec 13 14:29:00.706474 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:29:00.710250 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:29:00.715884 systemd[1]: Starting systemd-resolved.service... Dec 13 14:29:00.722057 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:29:00.732832 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:29:00.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.744714 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:29:00.748109 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:29:00.755420 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.756085 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.762528 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 14:29:00.769826 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:29:00.775249 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:00.776405 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.776628 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:00.776813 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:29:00.776931 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:29:00.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.778368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:00.781976 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:00.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.788944 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:00.789188 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:00.791945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:00.861000 audit[1760]: SYSTEM_BOOT pid=1760 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.792350 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:29:00.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.793905 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.794456 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.797229 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:29:00.799047 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.799243 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:00.799407 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:29:00.799553 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 14:29:00.800856 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.818438 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.820264 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.827392 systemd[1]: Starting modprobe@drm.service... Dec 13 14:29:00.831361 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:29:00.838887 systemd[1]: Starting modprobe@loop.service... Dec 13 14:29:00.840037 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.840238 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:00.840447 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:29:00.840612 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:29:00.842910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:29:00.845142 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:29:00.847496 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:29:00.848085 systemd[1]: Finished modprobe@drm.service. Dec 13 14:29:00.866686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:29:00.867113 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:29:00.869728 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:29:00.875159 systemd[1]: Finished systemd-update-utmp.service. 
Dec 13 14:29:00.884241 systemd[1]: Finished ensure-sysext.service. Dec 13 14:29:00.900621 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:29:00.900906 systemd[1]: Finished modprobe@loop.service. Dec 13 14:29:00.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.902124 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:29:00.960406 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:29:00.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:29:00.989000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:29:00.989000 audit[1793]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa71fc360 a2=420 a3=0 items=0 ppid=1754 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:29:00.989000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:29:00.991526 augenrules[1793]: No rules Dec 13 14:29:00.992442 systemd[1]: Finished audit-rules.service. Dec 13 14:29:01.077355 systemd-resolved[1757]: Positive Trust Anchors: Dec 13 14:29:01.077374 systemd-resolved[1757]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:29:01.077416 systemd-resolved[1757]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:29:01.080338 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:29:01.085379 systemd[1]: Reached target time-set.target. Dec 13 14:29:01.122917 systemd-resolved[1757]: Defaulting to hostname 'linux'. Dec 13 14:29:01.125603 systemd[1]: Started systemd-resolved.service. Dec 13 14:29:01.126683 systemd[1]: Reached target network.target. Dec 13 14:29:01.127715 systemd[1]: Reached target network-online.target. Dec 13 14:29:01.128705 systemd[1]: Reached target nss-lookup.target. Dec 13 14:29:01.161314 ldconfig[1623]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:29:01.179048 systemd[1]: Finished ldconfig.service. Dec 13 14:29:01.182924 systemd[1]: Starting systemd-update-done.service... Dec 13 14:29:01.195053 systemd[1]: Finished systemd-update-done.service. Dec 13 14:29:01.196371 systemd[1]: Reached target sysinit.target. Dec 13 14:29:01.197546 systemd[1]: Started motdgen.path. Dec 13 14:29:01.198347 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:29:01.199936 systemd[1]: Started logrotate.timer. Dec 13 14:29:01.200846 systemd[1]: Started mdadm.timer. Dec 13 14:29:01.201580 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:29:01.202488 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Dec 13 14:29:01.202538 systemd[1]: Reached target paths.target. Dec 13 14:29:01.203314 systemd[1]: Reached target timers.target. Dec 13 14:29:01.204738 systemd[1]: Listening on dbus.socket. Dec 13 14:29:01.207393 systemd[1]: Starting docker.socket... Dec 13 14:29:01.211335 systemd[1]: Listening on sshd.socket. Dec 13 14:29:01.212362 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:01.213315 systemd[1]: Listening on docker.socket. Dec 13 14:29:01.214140 systemd[1]: Reached target sockets.target. Dec 13 14:29:01.214956 systemd[1]: Reached target basic.target. Dec 13 14:29:01.216093 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:29:01.216260 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:29:01.216391 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:29:01.218424 systemd[1]: Started amazon-ssm-agent.service. Dec 13 14:29:01.221794 systemd[1]: Starting containerd.service... Dec 13 14:29:01.225980 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:29:01.229242 systemd[1]: Starting dbus.service... Dec 13 14:29:01.233132 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:29:01.236170 systemd[1]: Starting extend-filesystems.service... Dec 13 14:29:01.237337 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:29:01.241985 systemd[1]: Starting kubelet.service... Dec 13 14:29:01.247837 systemd[1]: Starting motdgen.service... Dec 13 14:29:01.256396 jq[1811]: false Dec 13 14:29:01.262023 systemd-timesyncd[1759]: Contacted time server 71.123.46.185:123 (0.flatcar.pool.ntp.org). 
Dec 13 14:29:01.262097 systemd-timesyncd[1759]: Initial clock synchronization to Fri 2024-12-13 14:29:01.320451 UTC. Dec 13 14:29:01.262552 systemd[1]: Started nvidia.service. Dec 13 14:29:01.271144 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:29:01.275671 systemd[1]: Starting sshd-keygen.service... Dec 13 14:29:01.284337 systemd[1]: Starting systemd-logind.service... Dec 13 14:29:01.286820 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:29:01.286936 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:29:01.297334 systemd[1]: Starting update-engine.service... Dec 13 14:29:01.310751 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:29:01.315018 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:29:01.315392 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:29:01.340568 jq[1823]: true Dec 13 14:29:01.352511 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:29:01.352856 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 14:29:01.475581 jq[1833]: true Dec 13 14:29:01.696409 extend-filesystems[1812]: Found loop1 Dec 13 14:29:01.696409 extend-filesystems[1812]: Found nvme0n1 Dec 13 14:29:01.696409 extend-filesystems[1812]: Found nvme0n1p1 Dec 13 14:29:01.696409 extend-filesystems[1812]: Found nvme0n1p2 Dec 13 14:29:01.696409 extend-filesystems[1812]: Found nvme0n1p3 Dec 13 14:29:01.696409 extend-filesystems[1812]: Found usr Dec 13 14:29:01.696409 extend-filesystems[1812]: Found nvme0n1p4 Dec 13 14:29:01.696409 extend-filesystems[1812]: Found nvme0n1p6 Dec 13 14:29:01.696409 extend-filesystems[1812]: Found nvme0n1p7 Dec 13 14:29:01.696409 extend-filesystems[1812]: Found nvme0n1p9 Dec 13 14:29:01.696409 extend-filesystems[1812]: Checking size of /dev/nvme0n1p9 Dec 13 14:29:01.887932 dbus-daemon[1810]: [system] SELinux support is enabled Dec 13 14:29:01.888313 systemd[1]: Started dbus.service. Dec 13 14:29:01.899663 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:29:01.899757 systemd[1]: Reached target system-config.target. Dec 13 14:29:01.901394 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:29:01.901428 systemd[1]: Reached target user-config.target. Dec 13 14:29:01.909064 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:29:01.910034 systemd[1]: Finished motdgen.service. Dec 13 14:29:01.933935 dbus-daemon[1810]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1510 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:29:01.954801 systemd[1]: Starting systemd-hostnamed.service... 
Dec 13 14:29:02.001569 extend-filesystems[1812]: Resized partition /dev/nvme0n1p9 Dec 13 14:29:02.021493 amazon-ssm-agent[1806]: 2024/12/13 14:29:02 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 14:29:02.035098 amazon-ssm-agent[1806]: Initializing new seelog logger Dec 13 14:29:02.035098 amazon-ssm-agent[1806]: New Seelog Logger Creation Complete Dec 13 14:29:02.035098 amazon-ssm-agent[1806]: 2024/12/13 14:29:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:29:02.035098 amazon-ssm-agent[1806]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:29:02.035098 amazon-ssm-agent[1806]: 2024/12/13 14:29:02 processing appconfig overrides Dec 13 14:29:02.036960 extend-filesystems[1876]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:29:02.051731 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:29:02.206038 update_engine[1821]: I1213 14:29:02.166651 1821 main.cc:92] Flatcar Update Engine starting Dec 13 14:29:02.206038 update_engine[1821]: I1213 14:29:02.201864 1821 update_check_scheduler.cc:74] Next update check in 7m42s Dec 13 14:29:02.198420 systemd[1]: Started update-engine.service. Dec 13 14:29:02.204587 systemd[1]: Started locksmithd.service. Dec 13 14:29:02.221685 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 14:29:02.268133 env[1840]: time="2024-12-13T14:29:02.265497200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:29:02.274012 extend-filesystems[1876]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:29:02.274012 extend-filesystems[1876]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:29:02.274012 extend-filesystems[1876]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Dec 13 14:29:02.304735 bash[1881]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:29:02.271870 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:29:02.305515 extend-filesystems[1812]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:29:02.272486 systemd[1]: Finished extend-filesystems.service. Dec 13 14:29:02.276862 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:29:02.576785 env[1840]: time="2024-12-13T14:29:02.575293058Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:29:02.576785 env[1840]: time="2024-12-13T14:29:02.575569405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:02.604400 systemd-logind[1820]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:29:02.604436 systemd-logind[1820]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 14:29:02.604460 systemd-logind[1820]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:29:02.608097 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.610938200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.610985958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.611482117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.611513150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.611536084Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.611552395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.611687092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.611992597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.612221730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:29:02.616048 env[1840]: time="2024-12-13T14:29:02.612246798Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:29:02.618329 env[1840]: time="2024-12-13T14:29:02.612474539Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:29:02.618329 env[1840]: time="2024-12-13T14:29:02.612498974Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:29:02.618411 systemd-logind[1820]: New seat seat0. 
Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.629748607Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.629817082Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.629841285Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.629924396Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630000919Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630087583Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630111105Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630132541Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630170102Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630202444Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630237946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630260100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:29:02.630492 env[1840]: time="2024-12-13T14:29:02.630447509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:29:02.631160 env[1840]: time="2024-12-13T14:29:02.630605179Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:29:02.631568 env[1840]: time="2024-12-13T14:29:02.631350002Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:29:02.631568 env[1840]: time="2024-12-13T14:29:02.631414001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.631568 env[1840]: time="2024-12-13T14:29:02.631439030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:29:02.631568 env[1840]: time="2024-12-13T14:29:02.631527644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.631786 env[1840]: time="2024-12-13T14:29:02.631550330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.631786 env[1840]: time="2024-12-13T14:29:02.631665876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.631786 env[1840]: time="2024-12-13T14:29:02.631700373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.631786 env[1840]: time="2024-12-13T14:29:02.631720442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 14:29:02.631786 env[1840]: time="2024-12-13T14:29:02.631739781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.631786 env[1840]: time="2024-12-13T14:29:02.631758912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.632168 env[1840]: time="2024-12-13T14:29:02.631793583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.632168 env[1840]: time="2024-12-13T14:29:02.631815740Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:29:02.632335 env[1840]: time="2024-12-13T14:29:02.632169207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.632335 env[1840]: time="2024-12-13T14:29:02.632263386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.632335 env[1840]: time="2024-12-13T14:29:02.632285852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.632335 env[1840]: time="2024-12-13T14:29:02.632303524Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:29:02.632493 env[1840]: time="2024-12-13T14:29:02.632343410Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:29:02.632493 env[1840]: time="2024-12-13T14:29:02.632361587Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 14:29:02.632493 env[1840]: time="2024-12-13T14:29:02.632389476Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:29:02.632493 env[1840]: time="2024-12-13T14:29:02.632458916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:29:02.634211 systemd[1]: Started systemd-logind.service. Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.632958949Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 
DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.633062185Z" level=info msg="Connect containerd service" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.633119173Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.636739744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.640677935Z" level=info msg="Start subscribing containerd event" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.640762292Z" level=info msg="Start recovering state" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.640876817Z" level=info msg="Start event monitor" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.640898060Z" level=info msg="Start snapshots syncer" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.640911765Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.640924793Z" level=info msg="Start streaming server" Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.638967174Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.641211718Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:29:02.643865 env[1840]: time="2024-12-13T14:29:02.641286800Z" level=info msg="containerd successfully booted in 0.423309s" Dec 13 14:29:02.641429 systemd[1]: Started containerd.service. Dec 13 14:29:02.909810 dbus-daemon[1810]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:29:02.909999 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:29:02.916360 dbus-daemon[1810]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1870 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:29:02.922019 systemd[1]: Starting polkit.service... Dec 13 14:29:02.953751 polkitd[1941]: Started polkitd version 121 Dec 13 14:29:02.993406 polkitd[1941]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:29:02.993496 polkitd[1941]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:29:03.007296 polkitd[1941]: Finished loading, compiling and executing 2 rules Dec 13 14:29:03.009733 dbus-daemon[1810]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:29:03.009925 systemd[1]: Started polkit.service. 
Dec 13 14:29:03.012159 polkitd[1941]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:29:03.019013 coreos-metadata[1808]: Dec 13 14:29:03.018 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:29:03.022455 coreos-metadata[1808]: Dec 13 14:29:03.022 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:29:03.023234 coreos-metadata[1808]: Dec 13 14:29:03.023 INFO Fetch successful Dec 13 14:29:03.023234 coreos-metadata[1808]: Dec 13 14:29:03.023 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:29:03.024041 coreos-metadata[1808]: Dec 13 14:29:03.023 INFO Fetch successful Dec 13 14:29:03.028112 unknown[1808]: wrote ssh authorized keys file for user: core Dec 13 14:29:03.060365 systemd-hostnamed[1870]: Hostname set to (transient) Dec 13 14:29:03.060620 systemd-resolved[1757]: System hostname changed to 'ip-172-31-20-184'. Dec 13 14:29:03.077654 update-ssh-keys[1972]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:29:03.078401 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Dec 13 14:29:03.174677 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Create new startup processor Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Initializing bookkeeping folders Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO removing the completed state files Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Initializing healthcheck folders for long running plugins Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Initializing locations for inventory plugin Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Initializing default location for custom inventory Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Initializing default location for file inventory Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Initializing default location for role inventory Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Init the cloudwatchlogs publisher Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:29:03.177341 
amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:29:03.177341 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:29:03.178146 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:29:03.178146 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:29:03.178146 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO OS: linux, Arch: amd64 Dec 13 14:29:03.178919 amazon-ssm-agent[1806]: datastore file /var/lib/amazon/ssm/i-05e54fc4905ce7396/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:29:03.275168 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] Starting document processing engine... 
Dec 13 14:29:03.370383 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Dec 13 14:29:03.464862 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Dec 13 14:29:03.559316 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] Starting message polling
Dec 13 14:29:03.654032 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] Starting send replies to MDS
Dec 13 14:29:03.750829 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [instanceID=i-05e54fc4905ce7396] Starting association polling
Dec 13 14:29:03.800931 locksmithd[1885]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:29:03.812012 sshd_keygen[1851]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:29:03.846250 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Dec 13 14:29:03.855215 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:29:03.858602 systemd[1]: Starting issuegen.service...
Dec 13 14:29:03.887541 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:29:03.894672 systemd[1]: Finished issuegen.service.
Dec 13 14:29:03.911773 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:29:03.941551 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] [Association] Launching response handler
Dec 13 14:29:03.954566 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:29:03.962038 systemd[1]: Started getty@tty1.service.
Dec 13 14:29:03.973336 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:29:03.975093 systemd[1]: Reached target getty.target.
Dec 13 14:29:04.037339 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Dec 13 14:29:04.133021 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Dec 13 14:29:04.178852 systemd[1]: Started kubelet.service.
Dec 13 14:29:04.180570 systemd[1]: Reached target multi-user.target.
Dec 13 14:29:04.193546 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:29:04.228730 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:29:04.229057 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Dec 13 14:29:04.229067 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:29:04.233084 systemd[1]: Startup finished in 8.536s (kernel) + 12.559s (userspace) = 21.096s.
Dec 13 14:29:04.325312 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:29:04.426682 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] Starting session document processing engine...
Dec 13 14:29:04.518221 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] [EngineProcessor] Starting
Dec 13 14:29:04.615017 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Dec 13 14:29:04.711808 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-05e54fc4905ce7396, requestId: b04d9542-d2d9-484e-a276-1796499e9bbc
Dec 13 14:29:04.809226 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [OfflineService] Starting document processing engine...
Dec 13 14:29:04.906485 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 14:29:05.003902 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 14:29:05.102378 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [OfflineService] Starting message polling
Dec 13 14:29:05.200431 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [OfflineService] Starting send replies to MDS
Dec 13 14:29:05.298305 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 14:29:05.387079 kubelet[2052]: E1213 14:29:05.386993 2052 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:29:05.389061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:29:05.389296 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:29:05.396523 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 14:29:05.495375 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] listening reply.
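The kubelet exit above (status=1/FAILURE) is the expected first-boot behavior: kubelet.service starts before anything has written `/var/lib/kubelet/config.yaml`, so the process exits until the file appears. A minimal sketch of that same preflight check, assuming only the path from the log (`kubelet_config_error` is a hypothetical helper, not kubelet code):

```python
from pathlib import Path

# Path reported by the failing kubelet in the log above.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_error(path=KUBELET_CONFIG):
    """Return an error string shaped like the kubelet's "command failed"
    message when the config file is absent, or None when it exists."""
    if not path.is_file():
        return ("failed to load Kubelet config file %s, "
                "error: open %s: no such file or directory" % (path, path))
    return None
```

Once a config file is dropped in place and the unit restarts, the check passes and the kubelet proceeds to start (as it does later in this log).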
Dec 13 14:29:05.594556 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 14:29:05.693514 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 14:29:05.792826 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 14:29:05.892552 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 14:29:05.992237 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 14:29:06.092374 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-05e54fc4905ce7396?role=subscribe&stream=input
Dec 13 14:29:06.192577 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-05e54fc4905ce7396?role=subscribe&stream=input
Dec 13 14:29:06.292817 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 14:29:06.393530 amazon-ssm-agent[1806]: 2024-12-13 14:29:03 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 14:29:09.641613 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:29:09.643388 systemd[1]: Started sshd@0-172.31.20.184:22-139.178.89.65:50210.service.
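The Session Manager websocket URL that the agent opens above follows a fixed pattern: region-scoped `ssmmessages` endpoint, the instance ID as the channel name, and `role=subscribe&stream=input` as query parameters. A sketch that reassembles it, inferred purely from the log line (not from AWS documentation):

```python
def control_channel_url(region, instance_id):
    """Rebuild the Session Manager control-channel URL format seen in
    the log above (format inferred from the log line itself)."""
    return ("wss://ssmmessages.%s.amazonaws.com/v1/control-channel/%s"
            "?role=subscribe&stream=input" % (region, instance_id))
```

For this boot, `control_channel_url("us-west-2", "i-05e54fc4905ce7396")` reproduces the exact URL the agent logs.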
Dec 13 14:29:09.847835 sshd[2062]: Accepted publickey for core from 139.178.89.65 port 50210 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:09.852694 sshd[2062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:09.900248 systemd[1]: Created slice user-500.slice.
Dec 13 14:29:09.905678 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:29:09.923057 systemd-logind[1820]: New session 1 of user core.
Dec 13 14:29:09.949522 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:29:09.963032 systemd[1]: Starting user@500.service...
Dec 13 14:29:09.971485 (systemd)[2067]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:10.139945 systemd[2067]: Queued start job for default target default.target.
Dec 13 14:29:10.141740 systemd[2067]: Reached target paths.target.
Dec 13 14:29:10.141776 systemd[2067]: Reached target sockets.target.
Dec 13 14:29:10.141794 systemd[2067]: Reached target timers.target.
Dec 13 14:29:10.141813 systemd[2067]: Reached target basic.target.
Dec 13 14:29:10.141996 systemd[1]: Started user@500.service.
Dec 13 14:29:10.143939 systemd[1]: Started session-1.scope.
Dec 13 14:29:10.144257 systemd[2067]: Reached target default.target.
Dec 13 14:29:10.144502 systemd[2067]: Startup finished in 124ms.
Dec 13 14:29:10.308909 systemd[1]: Started sshd@1-172.31.20.184:22-139.178.89.65:50224.service.
Dec 13 14:29:10.504707 sshd[2076]: Accepted publickey for core from 139.178.89.65 port 50224 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:10.506436 sshd[2076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:10.515045 systemd[1]: Started session-2.scope.
Dec 13 14:29:10.516092 systemd-logind[1820]: New session 2 of user core.
Dec 13 14:29:10.650039 sshd[2076]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:10.655142 systemd[1]: sshd@1-172.31.20.184:22-139.178.89.65:50224.service: Deactivated successfully.
Dec 13 14:29:10.658693 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:29:10.661159 systemd-logind[1820]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:29:10.664287 systemd-logind[1820]: Removed session 2.
Dec 13 14:29:10.681293 systemd[1]: Started sshd@2-172.31.20.184:22-139.178.89.65:50230.service.
Dec 13 14:29:10.852140 sshd[2083]: Accepted publickey for core from 139.178.89.65 port 50230 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:10.854903 sshd[2083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:10.861659 systemd-logind[1820]: New session 3 of user core.
Dec 13 14:29:10.862244 systemd[1]: Started session-3.scope.
Dec 13 14:29:10.988291 sshd[2083]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:10.995156 systemd[1]: sshd@2-172.31.20.184:22-139.178.89.65:50230.service: Deactivated successfully.
Dec 13 14:29:10.997427 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:29:10.997455 systemd-logind[1820]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:29:10.998930 systemd-logind[1820]: Removed session 3.
Dec 13 14:29:11.013487 systemd[1]: Started sshd@3-172.31.20.184:22-139.178.89.65:50242.service.
Dec 13 14:29:11.190762 sshd[2090]: Accepted publickey for core from 139.178.89.65 port 50242 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:11.193580 sshd[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:11.206849 systemd-logind[1820]: New session 4 of user core.
Dec 13 14:29:11.207555 systemd[1]: Started session-4.scope.
Dec 13 14:29:11.361524 sshd[2090]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:11.366068 systemd[1]: sshd@3-172.31.20.184:22-139.178.89.65:50242.service: Deactivated successfully.
Dec 13 14:29:11.367542 systemd-logind[1820]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:29:11.367750 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:29:11.369331 systemd-logind[1820]: Removed session 4.
Dec 13 14:29:11.386019 systemd[1]: Started sshd@4-172.31.20.184:22-139.178.89.65:50256.service.
Dec 13 14:29:11.552001 sshd[2097]: Accepted publickey for core from 139.178.89.65 port 50256 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:11.553509 sshd[2097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:11.561720 systemd[1]: Started session-5.scope.
Dec 13 14:29:11.562557 systemd-logind[1820]: New session 5 of user core.
Dec 13 14:29:11.739462 sudo[2101]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:29:11.740258 sudo[2101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:29:11.757887 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:29:11.863876 coreos-metadata[2105]: Dec 13 14:29:11.862 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 14:29:11.866008 coreos-metadata[2105]: Dec 13 14:29:11.865 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Dec 13 14:29:11.866774 coreos-metadata[2105]: Dec 13 14:29:11.866 INFO Fetch successful
Dec 13 14:29:11.867064 coreos-metadata[2105]: Dec 13 14:29:11.866 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Dec 13 14:29:11.868186 coreos-metadata[2105]: Dec 13 14:29:11.868 INFO Fetch successful
Dec 13 14:29:11.868298 coreos-metadata[2105]: Dec 13 14:29:11.868 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Dec 13 14:29:11.868924 coreos-metadata[2105]: Dec 13 14:29:11.868 INFO Fetch successful
Dec 13 14:29:11.869070 coreos-metadata[2105]: Dec 13 14:29:11.868 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Dec 13 14:29:11.869715 coreos-metadata[2105]: Dec 13 14:29:11.869 INFO Fetch successful
Dec 13 14:29:11.869963 coreos-metadata[2105]: Dec 13 14:29:11.869 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Dec 13 14:29:11.870675 coreos-metadata[2105]: Dec 13 14:29:11.870 INFO Fetch successful
Dec 13 14:29:11.870749 coreos-metadata[2105]: Dec 13 14:29:11.870 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Dec 13 14:29:11.871410 coreos-metadata[2105]: Dec 13 14:29:11.871 INFO Fetch successful
Dec 13 14:29:11.872150 coreos-metadata[2105]: Dec 13 14:29:11.872 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Dec 13 14:29:11.874540 coreos-metadata[2105]: Dec 13 14:29:11.874 INFO Fetch successful
Dec 13 14:29:11.874640 coreos-metadata[2105]: Dec 13 14:29:11.874 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Dec 13 14:29:11.875733 coreos-metadata[2105]: Dec 13 14:29:11.875 INFO Fetch successful
Dec 13 14:29:11.893211 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:29:13.388603 systemd[1]: Stopped kubelet.service.
Dec 13 14:29:13.398198 systemd[1]: Starting kubelet.service...
Dec 13 14:29:13.434025 systemd[1]: Reloading.
Dec 13 14:29:13.533283 /usr/lib/systemd/system-generators/torcx-generator[2167]: time="2024-12-13T14:29:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:29:13.533371 /usr/lib/systemd/system-generators/torcx-generator[2167]: time="2024-12-13T14:29:13Z" level=info msg="torcx already run"
Dec 13 14:29:13.708020 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:29:13.708044 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:29:13.733422 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:29:13.884808 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 14:29:13.884947 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 14:29:13.885358 systemd[1]: Stopped kubelet.service.
Dec 13 14:29:13.889448 systemd[1]: Starting kubelet.service...
Dec 13 14:29:14.416465 systemd[1]: Started kubelet.service.
Dec 13 14:29:14.483041 kubelet[2235]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:29:14.483452 kubelet[2235]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:29:14.483505 kubelet[2235]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:29:14.483642 kubelet[2235]: I1213 14:29:14.483614 2235 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:29:14.973153 kubelet[2235]: I1213 14:29:14.973116 2235 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:29:14.973153 kubelet[2235]: I1213 14:29:14.973148 2235 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:29:14.973429 kubelet[2235]: I1213 14:29:14.973407 2235 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:29:15.027861 kubelet[2235]: I1213 14:29:15.027806 2235 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:29:15.074690 kubelet[2235]: I1213 14:29:15.074651 2235 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:29:15.076086 kubelet[2235]: I1213 14:29:15.076052 2235 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:29:15.076316 kubelet[2235]: I1213 14:29:15.076291 2235 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:29:15.076943 kubelet[2235]: I1213 14:29:15.076919 2235 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:29:15.077019 kubelet[2235]: I1213 14:29:15.076951 2235 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:29:15.077132 kubelet[2235]: I1213 14:29:15.077114 2235 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:29:15.077258 kubelet[2235]: I1213 14:29:15.077241 2235 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:29:15.077322 kubelet[2235]: I1213 14:29:15.077268 2235 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:29:15.077322 kubelet[2235]: I1213 14:29:15.077300 2235 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:29:15.077322 kubelet[2235]: I1213 14:29:15.077320 2235 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:29:15.077895 kubelet[2235]: E1213 14:29:15.077864 2235 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:15.077976 kubelet[2235]: E1213 14:29:15.077930 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:15.079375 kubelet[2235]: I1213 14:29:15.079361 2235 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:29:15.083706 kubelet[2235]: I1213 14:29:15.083665 2235 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:29:15.084964 kubelet[2235]: W1213 14:29:15.084937 2235 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:29:15.085718 kubelet[2235]: I1213 14:29:15.085697 2235 server.go:1256] "Started kubelet"
Dec 13 14:29:15.088676 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
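The coreos-metadata run earlier in this boot walks a fixed list of EC2 instance-metadata endpoints under the `2019-10-01` API version (after first PUTting to the IMDSv2 token endpoint). A sketch reassembling that fetch list from the log; the base address and attribute order are taken from the log lines, while `fetch_urls` itself is a made-up helper:

```python
IMDS_BASE = "http://169.254.169.254"  # link-local metadata address from the log
META_VERSION = "2019-10-01"

# Attributes coreos-metadata fetched, in the order they appear in the log.
ATTRIBUTES = [
    "meta-data/instance-id",
    "meta-data/instance-type",
    "meta-data/local-ipv4",
    "meta-data/public-ipv4",
    "meta-data/placement/availability-zone",
    "meta-data/hostname",
    "meta-data/public-hostname",
    "dynamic/instance-identity/document",
]

def fetch_urls():
    """Build the metadata URLs in the same order the agent logs its fetches."""
    return ["%s/%s/%s" % (IMDS_BASE, META_VERSION, attr) for attr in ATTRIBUTES]
```

Each URL in this list corresponds to one "Fetching ... Attempt #1" / "Fetch successful" pair in the log.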
Dec 13 14:29:15.088756 kubelet[2235]: I1213 14:29:15.088640 2235 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:29:15.097339 kubelet[2235]: I1213 14:29:15.097302 2235 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:29:15.098308 kubelet[2235]: I1213 14:29:15.098290 2235 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:29:15.099548 kubelet[2235]: I1213 14:29:15.099522 2235 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:29:15.099750 kubelet[2235]: I1213 14:29:15.099728 2235 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:29:15.099852 kubelet[2235]: I1213 14:29:15.099838 2235 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:29:15.101723 kubelet[2235]: I1213 14:29:15.101703 2235 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:29:15.101927 kubelet[2235]: I1213 14:29:15.101916 2235 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:29:15.103238 kubelet[2235]: E1213 14:29:15.103217 2235 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:29:15.103460 kubelet[2235]: I1213 14:29:15.103450 2235 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:29:15.103692 kubelet[2235]: I1213 14:29:15.103668 2235 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:29:15.106257 kubelet[2235]: I1213 14:29:15.106246 2235 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:29:15.121675 kubelet[2235]: E1213 14:29:15.121566 2235 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.184.1810c2e744d48f3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.184,UID:172.31.20.184,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.20.184,},FirstTimestamp:2024-12-13 14:29:15.085655868 +0000 UTC m=+0.657192769,LastTimestamp:2024-12-13 14:29:15.085655868 +0000 UTC m=+0.657192769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.184,}"
Dec 13 14:29:15.122139 kubelet[2235]: W1213 14:29:15.122119 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.20.184" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:29:15.122252 kubelet[2235]: E1213 14:29:15.122242 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.20.184" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:29:15.122395 kubelet[2235]: W1213 14:29:15.122384 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:29:15.122479 kubelet[2235]: E1213 14:29:15.122471 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:29:15.123185 kubelet[2235]: W1213 14:29:15.123167 2235 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:29:15.123315 kubelet[2235]: E1213 14:29:15.123303 2235 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:29:15.123446 kubelet[2235]: E1213 14:29:15.123435 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.184\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 14:29:15.128557 kubelet[2235]: E1213 14:29:15.128530 2235 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.184.1810c2e745e0478e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.184,UID:172.31.20.184,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.20.184,},FirstTimestamp:2024-12-13 14:29:15.103201166 +0000 UTC m=+0.674738059,LastTimestamp:2024-12-13 14:29:15.103201166 +0000 UTC m=+0.674738059,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.184,}"
Dec 13 14:29:15.185810 kubelet[2235]: I1213 14:29:15.185776 2235 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:29:15.185810 kubelet[2235]: I1213 14:29:15.185810 2235 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:29:15.186005 kubelet[2235]: I1213 14:29:15.185836 2235 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:29:15.189165 kubelet[2235]: I1213 14:29:15.189131 2235 policy_none.go:49] "None policy: Start"
Dec 13 14:29:15.192904 kubelet[2235]: I1213 14:29:15.192877 2235 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:29:15.193054 kubelet[2235]: I1213 14:29:15.192918 2235 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:29:15.204705 kubelet[2235]: I1213 14:29:15.202969 2235 kubelet_node_status.go:73] "Attempting to register node" node="172.31.20.184"
Dec 13 14:29:15.211569 kubelet[2235]: I1213 14:29:15.211411 2235 kubelet_node_status.go:76] "Successfully registered node" node="172.31.20.184"
Dec 13 14:29:15.213014 kubelet[2235]: I1213 14:29:15.212985 2235 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:29:15.213356 kubelet[2235]: I1213 14:29:15.213335 2235 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:29:15.216085 kubelet[2235]: E1213 14:29:15.216069 2235 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.20.184\" not found"
Dec 13 14:29:15.257879 kubelet[2235]: E1213 14:29:15.253769 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:15.359540 kubelet[2235]: E1213 14:29:15.359501 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:15.415202 kubelet[2235]: I1213 14:29:15.415165 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:29:15.416878 kubelet[2235]: I1213 14:29:15.416842 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:29:15.416878 kubelet[2235]: I1213 14:29:15.416886 2235 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:29:15.417049 kubelet[2235]: I1213 14:29:15.416908 2235 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:29:15.417049 kubelet[2235]: E1213 14:29:15.416962 2235 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 14:29:15.460671 kubelet[2235]: E1213 14:29:15.460627 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:15.561705 kubelet[2235]: E1213 14:29:15.561541 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:15.662505 kubelet[2235]: E1213 14:29:15.662455 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:15.763335 kubelet[2235]: E1213 14:29:15.763288 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:15.864039 kubelet[2235]: E1213 14:29:15.863921 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:15.944837 sudo[2101]: pam_unix(sudo:session): session closed for user root
Dec 13 14:29:15.964769 kubelet[2235]: E1213 14:29:15.964726 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:15.974652 sshd[2097]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:15.976371 kubelet[2235]: I1213 14:29:15.976344 2235 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:29:15.976904 kubelet[2235]: W1213 14:29:15.976880 2235 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:29:15.985279 systemd[1]: sshd@4-172.31.20.184:22-139.178.89.65:50256.service: Deactivated successfully.
Dec 13 14:29:15.992934 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:29:15.993571 systemd-logind[1820]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:29:15.999136 systemd-logind[1820]: Removed session 5.
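The burst of `system:anonymous ... is forbidden` errors above is transient: the kubelet logs "Client rotation is on, will bootstrap in background" at startup and "Certificate rotation detected" here, i.e. it talks to the API server with anonymous credentials until TLS bootstrap issues its client certificate, after which the denials stop. A small sketch for triaging such lines, assuming only the message shape seen in this log (the regex and helper name are not from any Kubernetes tooling):

```python
import re

# Shape of the RBAC denials above, e.g.:
#   nodes "172.31.20.184" is forbidden: User "system:anonymous" cannot list
#   resource "nodes" in API group "" at the cluster scope
FORBIDDEN = re.compile(
    r'User "(?P<user>[^"]+)" cannot (?P<verb>\w+) resource "(?P<resource>[^"]+)"'
)

def parse_forbidden(line):
    """Extract user, verb and resource from an RBAC-denial log line,
    or return None when the line is not a denial."""
    m = FORBIDDEN.search(line)
    return m.groupdict() if m else None
```

Applied to the lines above, it yields `system:anonymous` as the user for every denial, which is the tell that client-certificate bootstrap had not yet completed.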
Dec 13 14:29:16.065888 kubelet[2235]: E1213 14:29:16.065836 2235 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.184\" not found"
Dec 13 14:29:16.078360 kubelet[2235]: E1213 14:29:16.078279 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:16.171037 kubelet[2235]: I1213 14:29:16.170471 2235 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 14:29:16.171427 env[1840]: time="2024-12-13T14:29:16.171380746Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:29:16.172077 kubelet[2235]: I1213 14:29:16.172055 2235 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 14:29:17.078952 kubelet[2235]: I1213 14:29:17.078913 2235 apiserver.go:52] "Watching apiserver"
Dec 13 14:29:17.079459 kubelet[2235]: E1213 14:29:17.079227 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:17.084879 kubelet[2235]: I1213 14:29:17.084838 2235 topology_manager.go:215] "Topology Admit Handler" podUID="99cebb23-8059-4659-af4b-2b2ef38bf93f" podNamespace="kube-system" podName="cilium-rdpmn"
Dec 13 14:29:17.085315 kubelet[2235]: I1213 14:29:17.085135 2235 topology_manager.go:215] "Topology Admit Handler" podUID="f7a5311c-711d-4e16-bbf1-3c3524b07768" podNamespace="kube-system" podName="kube-proxy-98wnl"
Dec 13 14:29:17.102769 kubelet[2235]: I1213 14:29:17.102721 2235 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:29:17.122882 kubelet[2235]: I1213 14:29:17.122843 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-config-path\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123092 kubelet[2235]: I1213 14:29:17.123076 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99cebb23-8059-4659-af4b-2b2ef38bf93f-hubble-tls\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123210 kubelet[2235]: I1213 14:29:17.123198 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k7gd\" (UniqueName: \"kubernetes.io/projected/99cebb23-8059-4659-af4b-2b2ef38bf93f-kube-api-access-9k7gd\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123613 kubelet[2235]: I1213 14:29:17.123573 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwm2c\" (UniqueName: \"kubernetes.io/projected/f7a5311c-711d-4e16-bbf1-3c3524b07768-kube-api-access-wwm2c\") pod \"kube-proxy-98wnl\" (UID: \"f7a5311c-711d-4e16-bbf1-3c3524b07768\") " pod="kube-system/kube-proxy-98wnl"
Dec 13 14:29:17.123695 kubelet[2235]: I1213 14:29:17.123643 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-etc-cni-netd\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123695 kubelet[2235]: I1213 14:29:17.123676 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cni-path\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123782 kubelet[2235]: I1213 14:29:17.123716 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-xtables-lock\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123782 kubelet[2235]: I1213 14:29:17.123747 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7a5311c-711d-4e16-bbf1-3c3524b07768-kube-proxy\") pod \"kube-proxy-98wnl\" (UID: \"f7a5311c-711d-4e16-bbf1-3c3524b07768\") " pod="kube-system/kube-proxy-98wnl"
Dec 13 14:29:17.123782 kubelet[2235]: I1213 14:29:17.123779 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7a5311c-711d-4e16-bbf1-3c3524b07768-xtables-lock\") pod \"kube-proxy-98wnl\" (UID: \"f7a5311c-711d-4e16-bbf1-3c3524b07768\") " pod="kube-system/kube-proxy-98wnl"
Dec 13 14:29:17.123914 kubelet[2235]: I1213 14:29:17.123812 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-hostproc\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123914 kubelet[2235]: I1213 14:29:17.123843 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-host-proc-sys-net\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123914 kubelet[2235]: I1213 14:29:17.123873 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-cgroup\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.123914 kubelet[2235]: I1213 14:29:17.123908 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-bpf-maps\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.124072 kubelet[2235]: I1213 14:29:17.123937 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-lib-modules\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.124072 kubelet[2235]: I1213 14:29:17.123967 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99cebb23-8059-4659-af4b-2b2ef38bf93f-clustermesh-secrets\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.124072 kubelet[2235]: I1213 14:29:17.123998 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-host-proc-sys-kernel\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.124072 kubelet[2235]: I1213 14:29:17.124027 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7a5311c-711d-4e16-bbf1-3c3524b07768-lib-modules\") pod \"kube-proxy-98wnl\" (UID: \"f7a5311c-711d-4e16-bbf1-3c3524b07768\") " pod="kube-system/kube-proxy-98wnl"
Dec 13 14:29:17.124072 kubelet[2235]: I1213 14:29:17.124066 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-run\") pod \"cilium-rdpmn\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") " pod="kube-system/cilium-rdpmn"
Dec 13 14:29:17.392384 env[1840]: time="2024-12-13T14:29:17.392253505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98wnl,Uid:f7a5311c-711d-4e16-bbf1-3c3524b07768,Namespace:kube-system,Attempt:0,}"
Dec 13 14:29:17.406788 env[1840]: time="2024-12-13T14:29:17.406726784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdpmn,Uid:99cebb23-8059-4659-af4b-2b2ef38bf93f,Namespace:kube-system,Attempt:0,}"
Dec 13 14:29:17.989000 env[1840]: time="2024-12-13T14:29:17.988943563Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:17.994232 env[1840]: time="2024-12-13T14:29:17.994182866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:17.996231 env[1840]: time="2024-12-13T14:29:17.996184187Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:17.999315 env[1840]: time="2024-12-13T14:29:17.999269150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:18.000668 env[1840]: time="2024-12-13T14:29:18.000629305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:18.002044 env[1840]: time="2024-12-13T14:29:18.002006260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:18.003147 env[1840]: time="2024-12-13T14:29:18.003115108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:18.006544 env[1840]: time="2024-12-13T14:29:18.006503361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:18.052324 env[1840]: time="2024-12-13T14:29:18.052230462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:29:18.052933 env[1840]: time="2024-12-13T14:29:18.052547130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:29:18.052933 env[1840]: time="2024-12-13T14:29:18.052733917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:29:18.054813 env[1840]: time="2024-12-13T14:29:18.054759708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991 pid=2290 runtime=io.containerd.runc.v2
Dec 13 14:29:18.071019 env[1840]: time="2024-12-13T14:29:18.070474597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:29:18.071019 env[1840]: time="2024-12-13T14:29:18.070530236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:29:18.071019 env[1840]: time="2024-12-13T14:29:18.070548964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:29:18.071398 env[1840]: time="2024-12-13T14:29:18.071326339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac6ab548cb6a0f5637f6dd8738b4ed955c7174b848cfa8bae8a4fd8b9429e847 pid=2311 runtime=io.containerd.runc.v2
Dec 13 14:29:18.079842 kubelet[2235]: E1213 14:29:18.079306 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:18.136684 env[1840]: time="2024-12-13T14:29:18.136573107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rdpmn,Uid:99cebb23-8059-4659-af4b-2b2ef38bf93f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\""
Dec 13 14:29:18.144935 env[1840]: time="2024-12-13T14:29:18.144890101Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:29:18.155191 env[1840]: time="2024-12-13T14:29:18.155138140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98wnl,Uid:f7a5311c-711d-4e16-bbf1-3c3524b07768,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac6ab548cb6a0f5637f6dd8738b4ed955c7174b848cfa8bae8a4fd8b9429e847\""
Dec 13 14:29:18.245911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495043170.mount: Deactivated successfully.
Dec 13 14:29:18.316744 amazon-ssm-agent[1806]: 2024-12-13 14:29:18 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Dec 13 14:29:19.080511 kubelet[2235]: E1213 14:29:19.080465 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:20.081095 kubelet[2235]: E1213 14:29:20.081055 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:21.081812 kubelet[2235]: E1213 14:29:21.081771 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:22.081967 kubelet[2235]: E1213 14:29:22.081929 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:23.082836 kubelet[2235]: E1213 14:29:23.082793 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:24.083919 kubelet[2235]: E1213 14:29:24.083881 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:25.085162 kubelet[2235]: E1213 14:29:25.085114 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:26.085566 kubelet[2235]: E1213 14:29:26.085521 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:27.086477 kubelet[2235]: E1213 14:29:27.086400 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:28.087002 kubelet[2235]: E1213 14:29:28.086929 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:29.087286 kubelet[2235]: E1213 14:29:29.087196 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:30.088163 kubelet[2235]: E1213 14:29:30.088088 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:31.089273 kubelet[2235]: E1213 14:29:31.089193 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:31.578446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678510235.mount: Deactivated successfully.
Dec 13 14:29:32.090352 kubelet[2235]: E1213 14:29:32.090273 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:33.090758 kubelet[2235]: E1213 14:29:33.090685 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:33.093372 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:29:34.091722 kubelet[2235]: E1213 14:29:34.091651 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:35.077793 kubelet[2235]: E1213 14:29:35.077674 2235 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:35.092071 kubelet[2235]: E1213 14:29:35.092011 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:36.092917 kubelet[2235]: E1213 14:29:36.092868 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:36.629174 env[1840]: time="2024-12-13T14:29:36.629068828Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:36.686038 env[1840]: time="2024-12-13T14:29:36.685991075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:36.724727 env[1840]: time="2024-12-13T14:29:36.724675729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:36.725767 env[1840]: time="2024-12-13T14:29:36.725721262Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:29:36.727353 env[1840]: time="2024-12-13T14:29:36.727317411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 14:29:36.728560 env[1840]: time="2024-12-13T14:29:36.728518179Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:29:36.792297 env[1840]: time="2024-12-13T14:29:36.792235364Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\""
Dec 13 14:29:36.793778 env[1840]: time="2024-12-13T14:29:36.793647847Z" level=info msg="StartContainer for \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\""
Dec 13 14:29:36.865846 env[1840]: time="2024-12-13T14:29:36.862785383Z" level=info msg="StartContainer for \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\" returns successfully"
Dec 13 14:29:37.093243 kubelet[2235]: E1213 14:29:37.093200 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:37.694367 env[1840]: time="2024-12-13T14:29:37.694307392Z" level=info msg="shim disconnected" id=297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48
Dec 13 14:29:37.695014 env[1840]: time="2024-12-13T14:29:37.694371758Z" level=warning msg="cleaning up after shim disconnected" id=297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48 namespace=k8s.io
Dec 13 14:29:37.695014 env[1840]: time="2024-12-13T14:29:37.694385225Z" level=info msg="cleaning up dead shim"
Dec 13 14:29:37.704722 env[1840]: time="2024-12-13T14:29:37.704680852Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2424 runtime=io.containerd.runc.v2\n"
Dec 13 14:29:37.781138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48-rootfs.mount: Deactivated successfully.
Dec 13 14:29:38.094204 kubelet[2235]: E1213 14:29:38.094150 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:38.479401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933718538.mount: Deactivated successfully.
Dec 13 14:29:38.516059 env[1840]: time="2024-12-13T14:29:38.516004361Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:29:38.556505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130018892.mount: Deactivated successfully.
Dec 13 14:29:38.581749 env[1840]: time="2024-12-13T14:29:38.581692597Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\""
Dec 13 14:29:38.582694 env[1840]: time="2024-12-13T14:29:38.582642746Z" level=info msg="StartContainer for \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\""
Dec 13 14:29:38.660211 env[1840]: time="2024-12-13T14:29:38.645779175Z" level=info msg="StartContainer for \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\" returns successfully"
Dec 13 14:29:38.666410 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:29:38.671871 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:29:38.672063 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:29:38.674203 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:29:38.688613 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:29:39.002726 env[1840]: time="2024-12-13T14:29:39.002578186Z" level=info msg="shim disconnected" id=370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2
Dec 13 14:29:39.003542 env[1840]: time="2024-12-13T14:29:39.003502260Z" level=warning msg="cleaning up after shim disconnected" id=370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2 namespace=k8s.io
Dec 13 14:29:39.003697 env[1840]: time="2024-12-13T14:29:39.003680814Z" level=info msg="cleaning up dead shim"
Dec 13 14:29:39.027817 env[1840]: time="2024-12-13T14:29:39.027771641Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2490 runtime=io.containerd.runc.v2\n"
Dec 13 14:29:39.095148 kubelet[2235]: E1213 14:29:39.095062 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:39.319501 env[1840]: time="2024-12-13T14:29:39.319444923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:39.322866 env[1840]: time="2024-12-13T14:29:39.322820357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:39.325653 env[1840]: time="2024-12-13T14:29:39.325615102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:39.327603 env[1840]: time="2024-12-13T14:29:39.327538387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:29:39.327979 env[1840]: time="2024-12-13T14:29:39.327948139Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 14:29:39.330469 env[1840]: time="2024-12-13T14:29:39.330371338Z" level=info msg="CreateContainer within sandbox \"ac6ab548cb6a0f5637f6dd8738b4ed955c7174b848cfa8bae8a4fd8b9429e847\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:29:39.356075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864488161.mount: Deactivated successfully.
Dec 13 14:29:39.376657 env[1840]: time="2024-12-13T14:29:39.376581299Z" level=info msg="CreateContainer within sandbox \"ac6ab548cb6a0f5637f6dd8738b4ed955c7174b848cfa8bae8a4fd8b9429e847\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d194a68448ca3ef5d39895a018fb68df31f5f241dadadfa34706b0263332f97\""
Dec 13 14:29:39.377390 env[1840]: time="2024-12-13T14:29:39.377339612Z" level=info msg="StartContainer for \"1d194a68448ca3ef5d39895a018fb68df31f5f241dadadfa34706b0263332f97\""
Dec 13 14:29:39.450176 env[1840]: time="2024-12-13T14:29:39.450098947Z" level=info msg="StartContainer for \"1d194a68448ca3ef5d39895a018fb68df31f5f241dadadfa34706b0263332f97\" returns successfully"
Dec 13 14:29:39.507978 env[1840]: time="2024-12-13T14:29:39.507926015Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:29:39.546602 env[1840]: time="2024-12-13T14:29:39.546531969Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\""
Dec 13 14:29:39.547288 env[1840]: time="2024-12-13T14:29:39.547251948Z" level=info msg="StartContainer for \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\""
Dec 13 14:29:39.659164 env[1840]: time="2024-12-13T14:29:39.658409079Z" level=info msg="StartContainer for \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\" returns successfully"
Dec 13 14:29:39.945124 env[1840]: time="2024-12-13T14:29:39.944981206Z" level=info msg="shim disconnected" id=532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc
Dec 13 14:29:39.945124 env[1840]: time="2024-12-13T14:29:39.945044313Z" level=warning msg="cleaning up after shim disconnected" id=532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc namespace=k8s.io
Dec 13 14:29:39.945124 env[1840]: time="2024-12-13T14:29:39.945059011Z" level=info msg="cleaning up dead shim"
Dec 13 14:29:39.960102 env[1840]: time="2024-12-13T14:29:39.960050432Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2712 runtime=io.containerd.runc.v2\n"
Dec 13 14:29:40.095627 kubelet[2235]: E1213 14:29:40.095567 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:40.527970 env[1840]: time="2024-12-13T14:29:40.527769925Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:29:40.569333 kubelet[2235]: I1213 14:29:40.569119 2235 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-98wnl" podStartSLOduration=4.3973130000000005 podStartE2EDuration="25.569033011s" podCreationTimestamp="2024-12-13 14:29:15 +0000 UTC" firstStartedPulling="2024-12-13 14:29:18.156545408 +0000 UTC m=+3.728082286" lastFinishedPulling="2024-12-13 14:29:39.328265412 +0000 UTC m=+24.899802297" observedRunningTime="2024-12-13 14:29:39.549401362 +0000 UTC m=+25.120938263" watchObservedRunningTime="2024-12-13 14:29:40.569033011 +0000 UTC m=+26.140569915"
Dec 13 14:29:40.577691 env[1840]: time="2024-12-13T14:29:40.577417040Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\""
Dec 13 14:29:40.580075 env[1840]: time="2024-12-13T14:29:40.579961145Z" level=info msg="StartContainer for \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\""
Dec 13 14:29:40.674267 env[1840]: time="2024-12-13T14:29:40.674222064Z" level=info msg="StartContainer for \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\" returns successfully"
Dec 13 14:29:40.704173 env[1840]: time="2024-12-13T14:29:40.704047838Z" level=info msg="shim disconnected" id=42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8
Dec 13 14:29:40.704173 env[1840]: time="2024-12-13T14:29:40.704172984Z" level=warning msg="cleaning up after shim disconnected" id=42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8 namespace=k8s.io
Dec 13 14:29:40.704967 env[1840]: time="2024-12-13T14:29:40.704189777Z" level=info msg="cleaning up dead shim"
Dec 13 14:29:40.730251 env[1840]: time="2024-12-13T14:29:40.730204540Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:29:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2768 runtime=io.containerd.runc.v2\n"
Dec 13 14:29:40.780243 systemd[1]: run-containerd-runc-k8s.io-42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8-runc.v6EsNT.mount: Deactivated successfully.
Dec 13 14:29:40.780444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8-rootfs.mount: Deactivated successfully.
Dec 13 14:29:41.096646 kubelet[2235]: E1213 14:29:41.096597 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:41.526229 env[1840]: time="2024-12-13T14:29:41.526076452Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:29:41.555596 env[1840]: time="2024-12-13T14:29:41.555536501Z" level=info msg="CreateContainer within sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\""
Dec 13 14:29:41.558951 env[1840]: time="2024-12-13T14:29:41.558909134Z" level=info msg="StartContainer for \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\""
Dec 13 14:29:41.677474 env[1840]: time="2024-12-13T14:29:41.674399343Z" level=info msg="StartContainer for \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\" returns successfully"
Dec 13 14:29:41.915006 kubelet[2235]: I1213 14:29:41.913638 2235 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:29:42.097516 kubelet[2235]: E1213 14:29:42.097478 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:42.250625 kernel: Initializing XFRM netlink socket
Dec 13 14:29:43.097742 kubelet[2235]: E1213 14:29:43.097687 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:44.026572 systemd-networkd[1510]: cilium_host: Link UP
Dec 13 14:29:44.030269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:29:44.030393 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:29:44.030734 systemd-networkd[1510]: cilium_net: Link UP
Dec 13 14:29:44.031074 systemd-networkd[1510]: cilium_net: Gained carrier
Dec 13 14:29:44.032062 systemd-networkd[1510]: cilium_host: Gained carrier
Dec 13 14:29:44.032205 (udev-worker)[2649]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:29:44.034982 (udev-worker)[2648]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:29:44.099258 kubelet[2235]: E1213 14:29:44.099200 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:44.297145 (udev-worker)[2926]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:29:44.302175 systemd-networkd[1510]: cilium_vxlan: Link UP
Dec 13 14:29:44.302183 systemd-networkd[1510]: cilium_vxlan: Gained carrier
Dec 13 14:29:44.402051 kubelet[2235]: I1213 14:29:44.402002 2235 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rdpmn" podStartSLOduration=10.817863238 podStartE2EDuration="29.401921901s" podCreationTimestamp="2024-12-13 14:29:15 +0000 UTC" firstStartedPulling="2024-12-13 14:29:18.14215131 +0000 UTC m=+3.713688190" lastFinishedPulling="2024-12-13 14:29:36.726209974 +0000 UTC m=+22.297746853" observedRunningTime="2024-12-13 14:29:42.563115009 +0000 UTC m=+28.134651911" watchObservedRunningTime="2024-12-13 14:29:44.401921901 +0000 UTC m=+29.973458800"
Dec 13 14:29:44.406448 kubelet[2235]: I1213 14:29:44.406366 2235 topology_manager.go:215] "Topology Admit Handler" podUID="544183f2-15f3-4303-b62d-fb4288a33f6d" podNamespace="default" podName="nginx-deployment-6d5f899847-77mck"
Dec 13 14:29:44.453276 kubelet[2235]: I1213 14:29:44.453155 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xzgm\" (UniqueName: \"kubernetes.io/projected/544183f2-15f3-4303-b62d-fb4288a33f6d-kube-api-access-8xzgm\") pod \"nginx-deployment-6d5f899847-77mck\" (UID: \"544183f2-15f3-4303-b62d-fb4288a33f6d\") " pod="default/nginx-deployment-6d5f899847-77mck"
Dec 13 14:29:44.626612 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:29:44.718298 env[1840]: time="2024-12-13T14:29:44.718244728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-77mck,Uid:544183f2-15f3-4303-b62d-fb4288a33f6d,Namespace:default,Attempt:0,}"
Dec 13 14:29:44.776706 systemd-networkd[1510]: cilium_host: Gained IPv6LL
Dec 13 14:29:44.838277 systemd-networkd[1510]: cilium_net: Gained IPv6LL
Dec 13 14:29:45.100043 kubelet[2235]: E1213 14:29:45.099968 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:45.541889 systemd-networkd[1510]: cilium_vxlan: Gained IPv6LL
Dec 13 14:29:45.673849 systemd-networkd[1510]: lxc_health: Link UP
Dec 13 14:29:45.725119 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:29:45.724332 systemd-networkd[1510]: lxc_health: Gained carrier
Dec 13 14:29:46.100262 kubelet[2235]: E1213 14:29:46.100223 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:46.300399 systemd-networkd[1510]: lxc6ba04f7df356: Link UP
Dec 13 14:29:46.331650 kernel: eth0: renamed from tmp5f9c7
Dec 13 14:29:46.346042 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6ba04f7df356: link becomes ready
Dec 13 14:29:46.339781 systemd-networkd[1510]: lxc6ba04f7df356: Gained carrier
Dec 13 14:29:47.101731 kubelet[2235]: E1213 14:29:47.101687 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:29:47.151177 update_engine[1821]: I1213 14:29:47.150657 1821 update_attempter.cc:509] Updating boot flags...
Dec 13 14:29:47.299724 systemd-networkd[1510]: lxc_health: Gained IPv6LL Dec 13 14:29:47.910104 systemd-networkd[1510]: lxc6ba04f7df356: Gained IPv6LL Dec 13 14:29:48.103356 kubelet[2235]: E1213 14:29:48.103288 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:48.349712 amazon-ssm-agent[1806]: 2024-12-13 14:29:48 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:29:49.103879 kubelet[2235]: E1213 14:29:49.103828 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:50.104987 kubelet[2235]: E1213 14:29:50.104911 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:51.105999 kubelet[2235]: E1213 14:29:51.105952 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:52.106495 kubelet[2235]: E1213 14:29:52.106450 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:52.429286 env[1840]: time="2024-12-13T14:29:52.429139674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:29:52.429286 env[1840]: time="2024-12-13T14:29:52.429181918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:29:52.429286 env[1840]: time="2024-12-13T14:29:52.429198583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:29:52.430141 env[1840]: time="2024-12-13T14:29:52.430084813Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f9c724033dd51591556d265cb3502d5be09e95713664521260ef6ae333f16af pid=3375 runtime=io.containerd.runc.v2 Dec 13 14:29:52.520486 env[1840]: time="2024-12-13T14:29:52.520438375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-77mck,Uid:544183f2-15f3-4303-b62d-fb4288a33f6d,Namespace:default,Attempt:0,} returns sandbox id \"5f9c724033dd51591556d265cb3502d5be09e95713664521260ef6ae333f16af\"" Dec 13 14:29:52.523263 env[1840]: time="2024-12-13T14:29:52.523224284Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:29:53.107049 kubelet[2235]: E1213 14:29:53.107003 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:54.108102 kubelet[2235]: E1213 14:29:54.108036 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:55.078265 kubelet[2235]: E1213 14:29:55.078225 2235 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:55.108346 kubelet[2235]: E1213 14:29:55.108301 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:55.859653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869401453.mount: Deactivated successfully. 
Dec 13 14:29:56.108787 kubelet[2235]: E1213 14:29:56.108728 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:57.109370 kubelet[2235]: E1213 14:29:57.109316 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:57.865721 env[1840]: time="2024-12-13T14:29:57.865667431Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:57.870790 env[1840]: time="2024-12-13T14:29:57.870741631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:57.877920 env[1840]: time="2024-12-13T14:29:57.877619396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:57.883399 env[1840]: time="2024-12-13T14:29:57.883356639Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:29:57.885269 env[1840]: time="2024-12-13T14:29:57.885006178Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:29:57.891996 env[1840]: time="2024-12-13T14:29:57.891699101Z" level=info msg="CreateContainer within sandbox \"5f9c724033dd51591556d265cb3502d5be09e95713664521260ef6ae333f16af\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:29:57.929121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750982914.mount: 
Deactivated successfully. Dec 13 14:29:57.936649 env[1840]: time="2024-12-13T14:29:57.936576684Z" level=info msg="CreateContainer within sandbox \"5f9c724033dd51591556d265cb3502d5be09e95713664521260ef6ae333f16af\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"506a1c6367c56462266a523be74a371210299fb61b42d67a6ed06d0e6916a0e8\"" Dec 13 14:29:57.937515 env[1840]: time="2024-12-13T14:29:57.937479645Z" level=info msg="StartContainer for \"506a1c6367c56462266a523be74a371210299fb61b42d67a6ed06d0e6916a0e8\"" Dec 13 14:29:58.019796 env[1840]: time="2024-12-13T14:29:58.019743601Z" level=info msg="StartContainer for \"506a1c6367c56462266a523be74a371210299fb61b42d67a6ed06d0e6916a0e8\" returns successfully" Dec 13 14:29:58.111999 kubelet[2235]: E1213 14:29:58.111854 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:29:58.611784 kubelet[2235]: I1213 14:29:58.611744 2235 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-77mck" podStartSLOduration=9.248561916 podStartE2EDuration="14.61170846s" podCreationTimestamp="2024-12-13 14:29:44 +0000 UTC" firstStartedPulling="2024-12-13 14:29:52.52238176 +0000 UTC m=+38.093918638" lastFinishedPulling="2024-12-13 14:29:57.885528288 +0000 UTC m=+43.457065182" observedRunningTime="2024-12-13 14:29:58.610905795 +0000 UTC m=+44.182442695" watchObservedRunningTime="2024-12-13 14:29:58.61170846 +0000 UTC m=+44.183245354" Dec 13 14:29:59.112527 kubelet[2235]: E1213 14:29:59.112469 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:00.112750 kubelet[2235]: E1213 14:30:00.112696 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:01.113961 kubelet[2235]: E1213 14:30:01.113691 2235 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:02.114488 kubelet[2235]: E1213 14:30:02.114336 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:03.114577 kubelet[2235]: E1213 14:30:03.114525 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:04.115788 kubelet[2235]: E1213 14:30:04.115739 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:05.116610 kubelet[2235]: E1213 14:30:05.116553 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:06.117335 kubelet[2235]: E1213 14:30:06.117280 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:07.050399 kubelet[2235]: I1213 14:30:07.049971 2235 topology_manager.go:215] "Topology Admit Handler" podUID="49cd0188-7ba2-417a-a353-f0c7b307a00c" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:30:07.118206 kubelet[2235]: E1213 14:30:07.118165 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:07.175529 kubelet[2235]: I1213 14:30:07.175490 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsrpp\" (UniqueName: \"kubernetes.io/projected/49cd0188-7ba2-417a-a353-f0c7b307a00c-kube-api-access-xsrpp\") pod \"nfs-server-provisioner-0\" (UID: \"49cd0188-7ba2-417a-a353-f0c7b307a00c\") " pod="default/nfs-server-provisioner-0" Dec 13 14:30:07.176019 kubelet[2235]: I1213 14:30:07.175951 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: 
\"kubernetes.io/empty-dir/49cd0188-7ba2-417a-a353-f0c7b307a00c-data\") pod \"nfs-server-provisioner-0\" (UID: \"49cd0188-7ba2-417a-a353-f0c7b307a00c\") " pod="default/nfs-server-provisioner-0" Dec 13 14:30:07.362258 env[1840]: time="2024-12-13T14:30:07.362084457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:49cd0188-7ba2-417a-a353-f0c7b307a00c,Namespace:default,Attempt:0,}" Dec 13 14:30:07.516017 (udev-worker)[3467]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:30:07.523113 (udev-worker)[3483]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:30:07.529696 systemd-networkd[1510]: lxc140fabe9d531: Link UP Dec 13 14:30:07.537756 kernel: eth0: renamed from tmp9ff41 Dec 13 14:30:07.550777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:30:07.551026 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc140fabe9d531: link becomes ready Dec 13 14:30:07.551083 systemd-networkd[1510]: lxc140fabe9d531: Gained carrier Dec 13 14:30:07.802257 env[1840]: time="2024-12-13T14:30:07.802153105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:07.802559 env[1840]: time="2024-12-13T14:30:07.802285804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:07.802559 env[1840]: time="2024-12-13T14:30:07.802317467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:07.804200 env[1840]: time="2024-12-13T14:30:07.804101031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ff41387855e0d5e5a55fcea46c1439ee796c6f37e5f10e9fad945946d6a339e pid=3498 runtime=io.containerd.runc.v2 Dec 13 14:30:07.912030 env[1840]: time="2024-12-13T14:30:07.911561041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:49cd0188-7ba2-417a-a353-f0c7b307a00c,Namespace:default,Attempt:0,} returns sandbox id \"9ff41387855e0d5e5a55fcea46c1439ee796c6f37e5f10e9fad945946d6a339e\"" Dec 13 14:30:07.913898 env[1840]: time="2024-12-13T14:30:07.913859979Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:30:08.119406 kubelet[2235]: E1213 14:30:08.119277 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:08.710040 systemd-networkd[1510]: lxc140fabe9d531: Gained IPv6LL Dec 13 14:30:09.120502 kubelet[2235]: E1213 14:30:09.120454 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:10.121252 kubelet[2235]: E1213 14:30:10.121211 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:11.122313 kubelet[2235]: E1213 14:30:11.122277 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:11.932476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817914396.mount: Deactivated successfully. 
Dec 13 14:30:12.123195 kubelet[2235]: E1213 14:30:12.123155 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:13.123916 kubelet[2235]: E1213 14:30:13.123854 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:14.124752 kubelet[2235]: E1213 14:30:14.124481 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:14.950800 env[1840]: time="2024-12-13T14:30:14.950742223Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.954145 env[1840]: time="2024-12-13T14:30:14.954096358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.956793 env[1840]: time="2024-12-13T14:30:14.956668907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.959490 env[1840]: time="2024-12-13T14:30:14.959448823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:14.960263 env[1840]: time="2024-12-13T14:30:14.960225666Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:30:14.962570 env[1840]: time="2024-12-13T14:30:14.962534370Z" level=info 
msg="CreateContainer within sandbox \"9ff41387855e0d5e5a55fcea46c1439ee796c6f37e5f10e9fad945946d6a339e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:30:15.019651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981414149.mount: Deactivated successfully. Dec 13 14:30:15.035252 env[1840]: time="2024-12-13T14:30:15.035184233Z" level=info msg="CreateContainer within sandbox \"9ff41387855e0d5e5a55fcea46c1439ee796c6f37e5f10e9fad945946d6a339e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"84fe21af9207aacc68bf4f14aec0eaaa34501e4b7444f9d72a9bc9e20a781ee1\"" Dec 13 14:30:15.036568 env[1840]: time="2024-12-13T14:30:15.036525980Z" level=info msg="StartContainer for \"84fe21af9207aacc68bf4f14aec0eaaa34501e4b7444f9d72a9bc9e20a781ee1\"" Dec 13 14:30:15.078361 kubelet[2235]: E1213 14:30:15.078299 2235 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:15.136281 kubelet[2235]: E1213 14:30:15.136208 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:15.196255 env[1840]: time="2024-12-13T14:30:15.195065840Z" level=info msg="StartContainer for \"84fe21af9207aacc68bf4f14aec0eaaa34501e4b7444f9d72a9bc9e20a781ee1\" returns successfully" Dec 13 14:30:16.137225 kubelet[2235]: E1213 14:30:16.137075 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:17.138360 kubelet[2235]: E1213 14:30:17.138256 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:18.139236 kubelet[2235]: E1213 14:30:18.139180 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:19.139392 kubelet[2235]: E1213 14:30:19.139353 2235 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:20.139690 kubelet[2235]: E1213 14:30:20.139647 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:21.140570 kubelet[2235]: E1213 14:30:21.140517 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:22.141401 kubelet[2235]: E1213 14:30:22.141345 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:23.142496 kubelet[2235]: E1213 14:30:23.142447 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:24.143377 kubelet[2235]: E1213 14:30:24.143324 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:25.086976 kubelet[2235]: I1213 14:30:25.086922 2235 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=12.039579405 podStartE2EDuration="19.086760252s" podCreationTimestamp="2024-12-13 14:30:06 +0000 UTC" firstStartedPulling="2024-12-13 14:30:07.913355071 +0000 UTC m=+53.484891952" lastFinishedPulling="2024-12-13 14:30:14.960535906 +0000 UTC m=+60.532072799" observedRunningTime="2024-12-13 14:30:15.692702819 +0000 UTC m=+61.264239719" watchObservedRunningTime="2024-12-13 14:30:25.086760252 +0000 UTC m=+70.658297218" Dec 13 14:30:25.087230 kubelet[2235]: I1213 14:30:25.087079 2235 topology_manager.go:215] "Topology Admit Handler" podUID="b5a8c605-ba5a-4233-ab69-c832883a7b3a" podNamespace="default" podName="test-pod-1" Dec 13 14:30:25.143519 kubelet[2235]: E1213 14:30:25.143482 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:30:25.237138 kubelet[2235]: I1213 14:30:25.237098 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3a0be03e-e825-48c3-9258-8f30ea2d05d8\" (UniqueName: \"kubernetes.io/nfs/b5a8c605-ba5a-4233-ab69-c832883a7b3a-pvc-3a0be03e-e825-48c3-9258-8f30ea2d05d8\") pod \"test-pod-1\" (UID: \"b5a8c605-ba5a-4233-ab69-c832883a7b3a\") " pod="default/test-pod-1" Dec 13 14:30:25.237467 kubelet[2235]: I1213 14:30:25.237451 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2thp\" (UniqueName: \"kubernetes.io/projected/b5a8c605-ba5a-4233-ab69-c832883a7b3a-kube-api-access-v2thp\") pod \"test-pod-1\" (UID: \"b5a8c605-ba5a-4233-ab69-c832883a7b3a\") " pod="default/test-pod-1" Dec 13 14:30:25.439940 kernel: FS-Cache: Loaded Dec 13 14:30:25.514086 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:30:25.514563 kernel: RPC: Registered udp transport module. Dec 13 14:30:25.514757 kernel: RPC: Registered tcp transport module. Dec 13 14:30:25.516270 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 14:30:25.704625 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:30:25.917814 kernel: NFS: Registering the id_resolver key type Dec 13 14:30:25.918003 kernel: Key type id_resolver registered Dec 13 14:30:25.918042 kernel: Key type id_legacy registered Dec 13 14:30:25.960039 nfsidmap[3626]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 14:30:25.964763 nfsidmap[3627]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 14:30:25.992059 env[1840]: time="2024-12-13T14:30:25.992004026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b5a8c605-ba5a-4233-ab69-c832883a7b3a,Namespace:default,Attempt:0,}" Dec 13 14:30:26.039642 (udev-worker)[3622]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:30:26.039645 (udev-worker)[3618]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:30:26.043259 systemd-networkd[1510]: lxcac647e6dadc5: Link UP Dec 13 14:30:26.048628 kernel: eth0: renamed from tmp24bf5 Dec 13 14:30:26.057859 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:30:26.057992 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcac647e6dadc5: link becomes ready Dec 13 14:30:26.058027 systemd-networkd[1510]: lxcac647e6dadc5: Gained carrier Dec 13 14:30:26.144957 kubelet[2235]: E1213 14:30:26.144888 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:26.237869 env[1840]: time="2024-12-13T14:30:26.237190575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:26.237869 env[1840]: time="2024-12-13T14:30:26.237233537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:26.237869 env[1840]: time="2024-12-13T14:30:26.237271777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:26.238339 env[1840]: time="2024-12-13T14:30:26.238251625Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24bf5ba4f83b0716b405148797c7f6bd96f85b7c45c00bd64cee0ed61e4dd0f9 pid=3653 runtime=io.containerd.runc.v2 Dec 13 14:30:26.337900 env[1840]: time="2024-12-13T14:30:26.337852030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b5a8c605-ba5a-4233-ab69-c832883a7b3a,Namespace:default,Attempt:0,} returns sandbox id \"24bf5ba4f83b0716b405148797c7f6bd96f85b7c45c00bd64cee0ed61e4dd0f9\"" Dec 13 14:30:26.340277 env[1840]: time="2024-12-13T14:30:26.340239798Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:30:26.647949 env[1840]: time="2024-12-13T14:30:26.647899461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:26.650552 env[1840]: time="2024-12-13T14:30:26.650487438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:26.653002 env[1840]: time="2024-12-13T14:30:26.652960475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:26.655629 env[1840]: time="2024-12-13T14:30:26.655571137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:30:26.656673 env[1840]: time="2024-12-13T14:30:26.656570421Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:30:26.659321 env[1840]: time="2024-12-13T14:30:26.659283747Z" level=info msg="CreateContainer within sandbox \"24bf5ba4f83b0716b405148797c7f6bd96f85b7c45c00bd64cee0ed61e4dd0f9\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:30:26.756045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888807237.mount: Deactivated successfully. Dec 13 14:30:26.768662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220739070.mount: Deactivated successfully. Dec 13 14:30:26.773940 env[1840]: time="2024-12-13T14:30:26.773895103Z" level=info msg="CreateContainer within sandbox \"24bf5ba4f83b0716b405148797c7f6bd96f85b7c45c00bd64cee0ed61e4dd0f9\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c4c0bb4318daf269624c305c842011dcc10b2df487aaa59c013dcec3c45ec10e\"" Dec 13 14:30:26.774923 env[1840]: time="2024-12-13T14:30:26.774888803Z" level=info msg="StartContainer for \"c4c0bb4318daf269624c305c842011dcc10b2df487aaa59c013dcec3c45ec10e\"" Dec 13 14:30:26.850995 env[1840]: time="2024-12-13T14:30:26.850916384Z" level=info msg="StartContainer for \"c4c0bb4318daf269624c305c842011dcc10b2df487aaa59c013dcec3c45ec10e\" returns successfully" Dec 13 14:30:27.141847 systemd-networkd[1510]: lxcac647e6dadc5: Gained IPv6LL Dec 13 14:30:27.145831 kubelet[2235]: E1213 14:30:27.145731 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:27.716781 kubelet[2235]: I1213 14:30:27.716744 2235 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=20.399542305 podStartE2EDuration="20.716708416s" podCreationTimestamp="2024-12-13 14:30:07 +0000 UTC" 
firstStartedPulling="2024-12-13 14:30:26.339695937 +0000 UTC m=+71.911232826" lastFinishedPulling="2024-12-13 14:30:26.656862055 +0000 UTC m=+72.228398937" observedRunningTime="2024-12-13 14:30:27.71639983 +0000 UTC m=+73.287936731" watchObservedRunningTime="2024-12-13 14:30:27.716708416 +0000 UTC m=+73.288245316" Dec 13 14:30:28.146774 kubelet[2235]: E1213 14:30:28.146729 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:29.147208 kubelet[2235]: E1213 14:30:29.147158 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:30.147793 kubelet[2235]: E1213 14:30:30.147740 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:31.148521 kubelet[2235]: E1213 14:30:31.148469 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:32.149284 kubelet[2235]: E1213 14:30:32.149232 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:32.715379 env[1840]: time="2024-12-13T14:30:32.715287638Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:30:32.725877 env[1840]: time="2024-12-13T14:30:32.725829942Z" level=info msg="StopContainer for \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\" with timeout 2 (s)" Dec 13 14:30:32.726359 env[1840]: time="2024-12-13T14:30:32.726281677Z" level=info msg="Stop container \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\" with signal terminated" Dec 13 14:30:32.735210 systemd-networkd[1510]: lxc_health: 
Link DOWN Dec 13 14:30:32.735218 systemd-networkd[1510]: lxc_health: Lost carrier Dec 13 14:30:32.907677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc-rootfs.mount: Deactivated successfully. Dec 13 14:30:32.935000 env[1840]: time="2024-12-13T14:30:32.934943487Z" level=info msg="shim disconnected" id=765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc Dec 13 14:30:32.935000 env[1840]: time="2024-12-13T14:30:32.935001225Z" level=warning msg="cleaning up after shim disconnected" id=765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc namespace=k8s.io Dec 13 14:30:32.935317 env[1840]: time="2024-12-13T14:30:32.935012967Z" level=info msg="cleaning up dead shim" Dec 13 14:30:32.948978 env[1840]: time="2024-12-13T14:30:32.948921391Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3786 runtime=io.containerd.runc.v2\n" Dec 13 14:30:32.951791 env[1840]: time="2024-12-13T14:30:32.951742361Z" level=info msg="StopContainer for \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\" returns successfully" Dec 13 14:30:32.952992 env[1840]: time="2024-12-13T14:30:32.952930120Z" level=info msg="StopPodSandbox for \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\"" Dec 13 14:30:32.953223 env[1840]: time="2024-12-13T14:30:32.953135354Z" level=info msg="Container to stop \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:30:32.953223 env[1840]: time="2024-12-13T14:30:32.953164050Z" level=info msg="Container to stop \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:30:32.953223 env[1840]: time="2024-12-13T14:30:32.953181672Z" level=info msg="Container to stop 
\"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:30:32.953223 env[1840]: time="2024-12-13T14:30:32.953197958Z" level=info msg="Container to stop \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:30:32.954815 env[1840]: time="2024-12-13T14:30:32.953237986Z" level=info msg="Container to stop \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:30:32.956169 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991-shm.mount: Deactivated successfully. Dec 13 14:30:32.998849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991-rootfs.mount: Deactivated successfully. 
Dec 13 14:30:33.015526 env[1840]: time="2024-12-13T14:30:33.015474137Z" level=info msg="shim disconnected" id=2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991
Dec 13 14:30:33.015818 env[1840]: time="2024-12-13T14:30:33.015796403Z" level=warning msg="cleaning up after shim disconnected" id=2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991 namespace=k8s.io
Dec 13 14:30:33.015965 env[1840]: time="2024-12-13T14:30:33.015949302Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:33.029198 env[1840]: time="2024-12-13T14:30:33.029143894Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3818 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:33.029894 env[1840]: time="2024-12-13T14:30:33.029854880Z" level=info msg="TearDown network for sandbox \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" successfully"
Dec 13 14:30:33.029894 env[1840]: time="2024-12-13T14:30:33.029891464Z" level=info msg="StopPodSandbox for \"2ee961cd7f9b75fb8c1a69591aea46b26c3184f2e881d2fc6f370de39168a991\" returns successfully"
Dec 13 14:30:33.096873 kubelet[2235]: I1213 14:30:33.096833 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k7gd\" (UniqueName: \"kubernetes.io/projected/99cebb23-8059-4659-af4b-2b2ef38bf93f-kube-api-access-9k7gd\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097272 kubelet[2235]: I1213 14:30:33.096892 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cni-path\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097272 kubelet[2235]: I1213 14:30:33.096917 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-bpf-maps\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097272 kubelet[2235]: I1213 14:30:33.096947 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99cebb23-8059-4659-af4b-2b2ef38bf93f-hubble-tls\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097272 kubelet[2235]: I1213 14:30:33.096971 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-xtables-lock\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097272 kubelet[2235]: I1213 14:30:33.096996 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-host-proc-sys-kernel\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097272 kubelet[2235]: I1213 14:30:33.097023 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-config-path\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097645 kubelet[2235]: I1213 14:30:33.097212 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-run\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097645 kubelet[2235]: I1213 14:30:33.097253 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-hostproc\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097645 kubelet[2235]: I1213 14:30:33.097277 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-cgroup\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097645 kubelet[2235]: I1213 14:30:33.097307 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99cebb23-8059-4659-af4b-2b2ef38bf93f-clustermesh-secrets\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097645 kubelet[2235]: I1213 14:30:33.097333 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-etc-cni-netd\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097645 kubelet[2235]: I1213 14:30:33.097408 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-host-proc-sys-net\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097891 kubelet[2235]: I1213 14:30:33.097436 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-lib-modules\") pod \"99cebb23-8059-4659-af4b-2b2ef38bf93f\" (UID: \"99cebb23-8059-4659-af4b-2b2ef38bf93f\") "
Dec 13 14:30:33.097891 kubelet[2235]: I1213 14:30:33.097503 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.100416 kubelet[2235]: I1213 14:30:33.100379 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:30:33.100561 kubelet[2235]: I1213 14:30:33.100447 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cni-path" (OuterVolumeSpecName: "cni-path") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.100561 kubelet[2235]: I1213 14:30:33.100476 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.103289 kubelet[2235]: I1213 14:30:33.103246 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.103511 kubelet[2235]: I1213 14:30:33.103347 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.103511 kubelet[2235]: I1213 14:30:33.103410 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.103511 kubelet[2235]: I1213 14:30:33.103441 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.103511 kubelet[2235]: I1213 14:30:33.103492 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-hostproc" (OuterVolumeSpecName: "hostproc") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.103923 kubelet[2235]: I1213 14:30:33.103523 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.104295 kubelet[2235]: I1213 14:30:33.104266 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.112375 systemd[1]: var-lib-kubelet-pods-99cebb23\x2d8059\x2d4659\x2daf4b\x2d2b2ef38bf93f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9k7gd.mount: Deactivated successfully.
Dec 13 14:30:33.112745 systemd[1]: var-lib-kubelet-pods-99cebb23\x2d8059\x2d4659\x2daf4b\x2d2b2ef38bf93f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:30:33.115967 kubelet[2235]: I1213 14:30:33.115921 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99cebb23-8059-4659-af4b-2b2ef38bf93f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:30:33.116100 kubelet[2235]: I1213 14:30:33.116028 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99cebb23-8059-4659-af4b-2b2ef38bf93f-kube-api-access-9k7gd" (OuterVolumeSpecName: "kube-api-access-9k7gd") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "kube-api-access-9k7gd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:30:33.116289 kubelet[2235]: I1213 14:30:33.116262 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99cebb23-8059-4659-af4b-2b2ef38bf93f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "99cebb23-8059-4659-af4b-2b2ef38bf93f" (UID: "99cebb23-8059-4659-af4b-2b2ef38bf93f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:30:33.150464 kubelet[2235]: E1213 14:30:33.150412 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:30:33.198073 kubelet[2235]: I1213 14:30:33.197906 2235 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9k7gd\" (UniqueName: \"kubernetes.io/projected/99cebb23-8059-4659-af4b-2b2ef38bf93f-kube-api-access-9k7gd\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198073 kubelet[2235]: I1213 14:30:33.197944 2235 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cni-path\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198073 kubelet[2235]: I1213 14:30:33.197955 2235 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-bpf-maps\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198073 kubelet[2235]: I1213 14:30:33.197965 2235 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99cebb23-8059-4659-af4b-2b2ef38bf93f-hubble-tls\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198073 kubelet[2235]: I1213 14:30:33.197974 2235 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-xtables-lock\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198073 kubelet[2235]: I1213 14:30:33.197985 2235 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99cebb23-8059-4659-af4b-2b2ef38bf93f-clustermesh-secrets\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198073 kubelet[2235]: I1213 14:30:33.197994 2235 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-host-proc-sys-kernel\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198073 kubelet[2235]: I1213 14:30:33.198003 2235 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-config-path\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198518 kubelet[2235]: I1213 14:30:33.198013 2235 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-run\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198518 kubelet[2235]: I1213 14:30:33.198022 2235 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-hostproc\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198518 kubelet[2235]: I1213 14:30:33.198032 2235 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-cilium-cgroup\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198518 kubelet[2235]: I1213 14:30:33.198041 2235 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-etc-cni-netd\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198518 kubelet[2235]: I1213 14:30:33.198051 2235 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-host-proc-sys-net\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.198518 kubelet[2235]: I1213 14:30:33.198061 2235 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99cebb23-8059-4659-af4b-2b2ef38bf93f-lib-modules\") on node \"172.31.20.184\" DevicePath \"\""
Dec 13 14:30:33.682047 systemd[1]: var-lib-kubelet-pods-99cebb23\x2d8059\x2d4659\x2daf4b\x2d2b2ef38bf93f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:30:33.722142 kubelet[2235]: I1213 14:30:33.721399 2235 scope.go:117] "RemoveContainer" containerID="765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc"
Dec 13 14:30:33.729731 env[1840]: time="2024-12-13T14:30:33.729325774Z" level=info msg="RemoveContainer for \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\""
Dec 13 14:30:33.748858 env[1840]: time="2024-12-13T14:30:33.748807569Z" level=info msg="RemoveContainer for \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\" returns successfully"
Dec 13 14:30:33.749696 kubelet[2235]: I1213 14:30:33.749666 2235 scope.go:117] "RemoveContainer" containerID="42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8"
Dec 13 14:30:33.751947 env[1840]: time="2024-12-13T14:30:33.751906130Z" level=info msg="RemoveContainer for \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\""
Dec 13 14:30:33.760924 env[1840]: time="2024-12-13T14:30:33.760769107Z" level=info msg="RemoveContainer for \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\" returns successfully"
Dec 13 14:30:33.761426 kubelet[2235]: I1213 14:30:33.761394 2235 scope.go:117] "RemoveContainer" containerID="532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc"
Dec 13 14:30:33.771989 env[1840]: time="2024-12-13T14:30:33.771943438Z" level=info msg="RemoveContainer for \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\""
Dec 13 14:30:33.781777 env[1840]: time="2024-12-13T14:30:33.781654640Z" level=info msg="RemoveContainer for \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\" returns successfully"
Dec 13 14:30:33.781990 kubelet[2235]: I1213 14:30:33.781958 2235 scope.go:117] "RemoveContainer" containerID="370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2"
Dec 13 14:30:33.790073 env[1840]: time="2024-12-13T14:30:33.790035859Z" level=info msg="RemoveContainer for \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\""
Dec 13 14:30:33.798336 env[1840]: time="2024-12-13T14:30:33.798289376Z" level=info msg="RemoveContainer for \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\" returns successfully"
Dec 13 14:30:33.799071 kubelet[2235]: I1213 14:30:33.799045 2235 scope.go:117] "RemoveContainer" containerID="297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48"
Dec 13 14:30:33.801302 env[1840]: time="2024-12-13T14:30:33.801260095Z" level=info msg="RemoveContainer for \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\""
Dec 13 14:30:33.805885 env[1840]: time="2024-12-13T14:30:33.805841236Z" level=info msg="RemoveContainer for \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\" returns successfully"
Dec 13 14:30:33.806238 kubelet[2235]: I1213 14:30:33.806200 2235 scope.go:117] "RemoveContainer" containerID="765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc"
Dec 13 14:30:33.806680 env[1840]: time="2024-12-13T14:30:33.806605826Z" level=error msg="ContainerStatus for \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\": not found"
Dec 13 14:30:33.806968 kubelet[2235]: E1213 14:30:33.806941 2235 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\": not found" containerID="765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc"
Dec 13 14:30:33.807184 kubelet[2235]: I1213 14:30:33.807164 2235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc"} err="failed to get container status \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"765b93330b1d3728a55d361f8f311d34afc641e91804048c7a0a4a8c36d070fc\": not found"
Dec 13 14:30:33.807930 kubelet[2235]: I1213 14:30:33.807189 2235 scope.go:117] "RemoveContainer" containerID="42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8"
Dec 13 14:30:33.812070 env[1840]: time="2024-12-13T14:30:33.808263449Z" level=error msg="ContainerStatus for \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\": not found"
Dec 13 14:30:33.812352 kubelet[2235]: E1213 14:30:33.812275 2235 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\": not found" containerID="42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8"
Dec 13 14:30:33.812466 kubelet[2235]: I1213 14:30:33.812385 2235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8"} err="failed to get container status \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"42ff20a940ee09d25d8c66c6572f34c4dadff4c7a41b3bc0e6a2215af64213a8\": not found"
Dec 13 14:30:33.812466 kubelet[2235]: I1213 14:30:33.812409 2235 scope.go:117] "RemoveContainer" containerID="532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc"
Dec 13 14:30:33.813161 env[1840]: time="2024-12-13T14:30:33.813029089Z" level=error msg="ContainerStatus for \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\": not found"
Dec 13 14:30:33.813302 kubelet[2235]: E1213 14:30:33.813280 2235 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\": not found" containerID="532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc"
Dec 13 14:30:33.813382 kubelet[2235]: I1213 14:30:33.813321 2235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc"} err="failed to get container status \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\": rpc error: code = NotFound desc = an error occurred when try to find container \"532566aa14cd71a229bbf23dd10e03a19290777af01647e099958439f72dcebc\": not found"
Dec 13 14:30:33.813382 kubelet[2235]: I1213 14:30:33.813341 2235 scope.go:117] "RemoveContainer" containerID="370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2"
Dec 13 14:30:33.814161 env[1840]: time="2024-12-13T14:30:33.813954224Z" level=error msg="ContainerStatus for \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\": not found"
Dec 13 14:30:33.816481 kubelet[2235]: E1213 14:30:33.816443 2235 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\": not found" containerID="370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2"
Dec 13 14:30:33.816603 kubelet[2235]: I1213 14:30:33.816508 2235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2"} err="failed to get container status \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"370edea39eb07c11c06cf85453ef0a9542e1dadf467f335e61eee9d3414f98b2\": not found"
Dec 13 14:30:33.816603 kubelet[2235]: I1213 14:30:33.816528 2235 scope.go:117] "RemoveContainer" containerID="297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48"
Dec 13 14:30:33.817020 env[1840]: time="2024-12-13T14:30:33.816928051Z" level=error msg="ContainerStatus for \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\": not found"
Dec 13 14:30:33.817237 kubelet[2235]: E1213 14:30:33.817214 2235 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\": not found" containerID="297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48"
Dec 13 14:30:33.817324 kubelet[2235]: I1213 14:30:33.817256 2235 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48"} err="failed to get container status \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\": rpc error: code = NotFound desc = an error occurred when try to find container \"297ac0981cfcc42d2a4399e87c264cd047ccddff7f75e52f168eba530c18ba48\": not found"
Dec 13 14:30:34.150982 kubelet[2235]: E1213 14:30:34.150930 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:30:35.077977 kubelet[2235]: E1213 14:30:35.077920 2235 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:30:35.151492 kubelet[2235]: E1213 14:30:35.151435 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:30:35.250771 kubelet[2235]: E1213 14:30:35.250736 2235 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:30:35.421026 kubelet[2235]: I1213 14:30:35.420923 2235 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="99cebb23-8059-4659-af4b-2b2ef38bf93f" path="/var/lib/kubelet/pods/99cebb23-8059-4659-af4b-2b2ef38bf93f/volumes"
Dec 13 14:30:36.077738 kubelet[2235]: I1213 14:30:36.077695 2235 topology_manager.go:215] "Topology Admit Handler" podUID="213f4bf4-907d-4b3a-a7fe-4ba2d411a407" podNamespace="kube-system" podName="cilium-operator-5cc964979-t7cqn"
Dec 13 14:30:36.077948 kubelet[2235]: E1213 14:30:36.077771 2235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99cebb23-8059-4659-af4b-2b2ef38bf93f" containerName="mount-cgroup"
Dec 13 14:30:36.077948 kubelet[2235]: E1213 14:30:36.077786 2235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99cebb23-8059-4659-af4b-2b2ef38bf93f" containerName="apply-sysctl-overwrites"
Dec 13 14:30:36.077948 kubelet[2235]: E1213 14:30:36.077797 2235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99cebb23-8059-4659-af4b-2b2ef38bf93f" containerName="mount-bpf-fs"
Dec 13 14:30:36.077948 kubelet[2235]: E1213 14:30:36.077805 2235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99cebb23-8059-4659-af4b-2b2ef38bf93f" containerName="clean-cilium-state"
Dec 13 14:30:36.077948 kubelet[2235]: E1213 14:30:36.077815 2235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99cebb23-8059-4659-af4b-2b2ef38bf93f" containerName="cilium-agent"
Dec 13 14:30:36.077948 kubelet[2235]: I1213 14:30:36.077837 2235 memory_manager.go:354] "RemoveStaleState removing state" podUID="99cebb23-8059-4659-af4b-2b2ef38bf93f" containerName="cilium-agent"
Dec 13 14:30:36.090928 kubelet[2235]: I1213 14:30:36.090900 2235 topology_manager.go:215] "Topology Admit Handler" podUID="dc59b550-ae6f-48d3-aaf9-74bd414a83a5" podNamespace="kube-system" podName="cilium-npgd9"
Dec 13 14:30:36.116044 kubelet[2235]: I1213 14:30:36.116007 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phsg\" (UniqueName: \"kubernetes.io/projected/213f4bf4-907d-4b3a-a7fe-4ba2d411a407-kube-api-access-4phsg\") pod \"cilium-operator-5cc964979-t7cqn\" (UID: \"213f4bf4-907d-4b3a-a7fe-4ba2d411a407\") " pod="kube-system/cilium-operator-5cc964979-t7cqn"
Dec 13 14:30:36.116223 kubelet[2235]: I1213 14:30:36.116066 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/213f4bf4-907d-4b3a-a7fe-4ba2d411a407-cilium-config-path\") pod \"cilium-operator-5cc964979-t7cqn\" (UID: \"213f4bf4-907d-4b3a-a7fe-4ba2d411a407\") " pod="kube-system/cilium-operator-5cc964979-t7cqn"
Dec 13 14:30:36.152430 kubelet[2235]: E1213 14:30:36.152386 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:30:36.216762 kubelet[2235]: I1213 14:30:36.216714 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-lib-modules\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.216762 kubelet[2235]: I1213 14:30:36.216769 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-host-proc-sys-net\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217116 kubelet[2235]: I1213 14:30:36.216798 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-host-proc-sys-kernel\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217116 kubelet[2235]: I1213 14:30:36.216823 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-etc-cni-netd\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217116 kubelet[2235]: I1213 14:30:36.216899 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-xtables-lock\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217116 kubelet[2235]: I1213 14:30:36.217015 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2fs\" (UniqueName: \"kubernetes.io/projected/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-kube-api-access-6f2fs\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217116 kubelet[2235]: I1213 14:30:36.217049 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-bpf-maps\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217116 kubelet[2235]: I1213 14:30:36.217077 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-hostproc\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217705 kubelet[2235]: I1213 14:30:36.217111 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-cgroup\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217705 kubelet[2235]: I1213 14:30:36.217139 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cni-path\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217705 kubelet[2235]: I1213 14:30:36.217435 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-run\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217705 kubelet[2235]: I1213 14:30:36.217499 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-config-path\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217705 kubelet[2235]: I1213 14:30:36.217530 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-ipsec-secrets\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.217705 kubelet[2235]: I1213 14:30:36.217562 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-hubble-tls\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.219253 kubelet[2235]: I1213 14:30:36.217615 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-clustermesh-secrets\") pod \"cilium-npgd9\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " pod="kube-system/cilium-npgd9"
Dec 13 14:30:36.393583 env[1840]: time="2024-12-13T14:30:36.392542575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-t7cqn,Uid:213f4bf4-907d-4b3a-a7fe-4ba2d411a407,Namespace:kube-system,Attempt:0,}"
Dec 13 14:30:36.396555 env[1840]: time="2024-12-13T14:30:36.396519775Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:cilium-npgd9,Uid:dc59b550-ae6f-48d3-aaf9-74bd414a83a5,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:36.441370 env[1840]: time="2024-12-13T14:30:36.441288666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:36.441643 env[1840]: time="2024-12-13T14:30:36.441581002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:36.441791 env[1840]: time="2024-12-13T14:30:36.441766140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:36.442122 env[1840]: time="2024-12-13T14:30:36.442087081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca48c5aa9ecc3bd15efaa57323464a954ac6a138d427167c318026b22e85bd4d pid=3847 runtime=io.containerd.runc.v2 Dec 13 14:30:36.447441 env[1840]: time="2024-12-13T14:30:36.447355042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:36.447732 env[1840]: time="2024-12-13T14:30:36.447683157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:36.447921 env[1840]: time="2024-12-13T14:30:36.447849696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:36.448337 env[1840]: time="2024-12-13T14:30:36.448261348Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624 pid=3862 runtime=io.containerd.runc.v2 Dec 13 14:30:36.519981 env[1840]: time="2024-12-13T14:30:36.519813177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-npgd9,Uid:dc59b550-ae6f-48d3-aaf9-74bd414a83a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624\"" Dec 13 14:30:36.526106 env[1840]: time="2024-12-13T14:30:36.526059346Z" level=info msg="CreateContainer within sandbox \"e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:30:36.555913 env[1840]: time="2024-12-13T14:30:36.555867672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-t7cqn,Uid:213f4bf4-907d-4b3a-a7fe-4ba2d411a407,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca48c5aa9ecc3bd15efaa57323464a954ac6a138d427167c318026b22e85bd4d\"" Dec 13 14:30:36.558040 env[1840]: time="2024-12-13T14:30:36.558002062Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:30:36.658748 env[1840]: time="2024-12-13T14:30:36.658033909Z" level=info msg="CreateContainer within sandbox \"e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0\"" Dec 13 14:30:36.659420 env[1840]: time="2024-12-13T14:30:36.659139318Z" level=info msg="StartContainer for \"2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0\"" Dec 13 14:30:36.733003 env[1840]: 
time="2024-12-13T14:30:36.732896507Z" level=info msg="StartContainer for \"2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0\" returns successfully" Dec 13 14:30:36.851726 env[1840]: time="2024-12-13T14:30:36.851630737Z" level=info msg="shim disconnected" id=2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0 Dec 13 14:30:36.852011 env[1840]: time="2024-12-13T14:30:36.851730662Z" level=warning msg="cleaning up after shim disconnected" id=2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0 namespace=k8s.io Dec 13 14:30:36.852011 env[1840]: time="2024-12-13T14:30:36.851748659Z" level=info msg="cleaning up dead shim" Dec 13 14:30:36.863004 env[1840]: time="2024-12-13T14:30:36.862950051Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3976 runtime=io.containerd.runc.v2\n" Dec 13 14:30:37.153601 kubelet[2235]: E1213 14:30:37.153536 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:37.209626 kubelet[2235]: I1213 14:30:37.208691 2235 setters.go:568] "Node became not ready" node="172.31.20.184" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:30:37Z","lastTransitionTime":"2024-12-13T14:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:30:37.744567 env[1840]: time="2024-12-13T14:30:37.743265537Z" level=info msg="StopPodSandbox for \"e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624\"" Dec 13 14:30:37.753096 env[1840]: time="2024-12-13T14:30:37.745043103Z" level=info msg="Container to stop \"2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:30:37.752702 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624-shm.mount: Deactivated successfully. Dec 13 14:30:37.826661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624-rootfs.mount: Deactivated successfully. Dec 13 14:30:37.867531 env[1840]: time="2024-12-13T14:30:37.867478944Z" level=info msg="shim disconnected" id=e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624 Dec 13 14:30:37.867967 env[1840]: time="2024-12-13T14:30:37.867940860Z" level=warning msg="cleaning up after shim disconnected" id=e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624 namespace=k8s.io Dec 13 14:30:37.868098 env[1840]: time="2024-12-13T14:30:37.868081043Z" level=info msg="cleaning up dead shim" Dec 13 14:30:37.884687 env[1840]: time="2024-12-13T14:30:37.884632492Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n" Dec 13 14:30:37.885577 env[1840]: time="2024-12-13T14:30:37.885511013Z" level=info msg="TearDown network for sandbox \"e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624\" successfully" Dec 13 14:30:37.885577 env[1840]: time="2024-12-13T14:30:37.885572991Z" level=info msg="StopPodSandbox for \"e7199b57f0d8088a50311c7fa379a52d9fb7cdde57fff493f8121850b3953624\" returns successfully" Dec 13 14:30:37.930065 kubelet[2235]: I1213 14:30:37.930028 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-xtables-lock\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930065 kubelet[2235]: I1213 14:30:37.930074 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-bpf-maps\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930454 kubelet[2235]: I1213 14:30:37.930232 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-ipsec-secrets\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930454 kubelet[2235]: I1213 14:30:37.930267 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-hubble-tls\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930454 kubelet[2235]: I1213 14:30:37.930294 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-clustermesh-secrets\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930454 kubelet[2235]: I1213 14:30:37.930317 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-host-proc-sys-net\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930454 kubelet[2235]: I1213 14:30:37.930345 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-etc-cni-netd\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930454 kubelet[2235]: I1213 14:30:37.930371 2235 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-cgroup\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930748 kubelet[2235]: I1213 14:30:37.930397 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-run\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930748 kubelet[2235]: I1213 14:30:37.930427 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-lib-modules\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930748 kubelet[2235]: I1213 14:30:37.930455 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-host-proc-sys-kernel\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930748 kubelet[2235]: I1213 14:30:37.930486 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f2fs\" (UniqueName: \"kubernetes.io/projected/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-kube-api-access-6f2fs\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930748 kubelet[2235]: I1213 14:30:37.930517 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-config-path\") pod 
\"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930748 kubelet[2235]: I1213 14:30:37.930545 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cni-path\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930993 kubelet[2235]: I1213 14:30:37.930571 2235 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-hostproc\") pod \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\" (UID: \"dc59b550-ae6f-48d3-aaf9-74bd414a83a5\") " Dec 13 14:30:37.930993 kubelet[2235]: I1213 14:30:37.930663 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.930993 kubelet[2235]: I1213 14:30:37.930698 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.930993 kubelet[2235]: I1213 14:30:37.930720 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.931245 kubelet[2235]: I1213 14:30:37.931219 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.931449 kubelet[2235]: I1213 14:30:37.931430 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.931761 kubelet[2235]: I1213 14:30:37.931567 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.936125 kubelet[2235]: I1213 14:30:37.935991 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.939208 kubelet[2235]: I1213 14:30:37.939162 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:30:37.945649 systemd[1]: var-lib-kubelet-pods-dc59b550\x2dae6f\x2d48d3\x2daaf9\x2d74bd414a83a5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:30:37.946365 kubelet[2235]: I1213 14:30:37.946327 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.946527 kubelet[2235]: I1213 14:30:37.946510 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.946659 kubelet[2235]: I1213 14:30:37.946644 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:30:37.954747 kubelet[2235]: I1213 14:30:37.954692 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:30:37.966882 systemd[1]: var-lib-kubelet-pods-dc59b550\x2dae6f\x2d48d3\x2daaf9\x2d74bd414a83a5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:30:37.969349 kubelet[2235]: I1213 14:30:37.968893 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-kube-api-access-6f2fs" (OuterVolumeSpecName: "kube-api-access-6f2fs") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "kube-api-access-6f2fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:30:37.969518 kubelet[2235]: I1213 14:30:37.969490 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:30:37.969897 kubelet[2235]: I1213 14:30:37.969861 2235 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc59b550-ae6f-48d3-aaf9-74bd414a83a5" (UID: "dc59b550-ae6f-48d3-aaf9-74bd414a83a5"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:30:38.031838 kubelet[2235]: I1213 14:30:38.031534 2235 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-run\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.031838 kubelet[2235]: I1213 14:30:38.031573 2235 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-clustermesh-secrets\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.031838 kubelet[2235]: I1213 14:30:38.031603 2235 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-host-proc-sys-net\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.031838 kubelet[2235]: I1213 14:30:38.031619 2235 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-etc-cni-netd\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.031838 kubelet[2235]: I1213 14:30:38.031632 2235 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-cgroup\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.031838 kubelet[2235]: I1213 14:30:38.031646 2235 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-config-path\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.031838 kubelet[2235]: I1213 14:30:38.031659 2235 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-lib-modules\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 
14:30:38.031838 kubelet[2235]: I1213 14:30:38.031673 2235 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-host-proc-sys-kernel\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.032396 kubelet[2235]: I1213 14:30:38.031694 2235 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6f2fs\" (UniqueName: \"kubernetes.io/projected/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-kube-api-access-6f2fs\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.032396 kubelet[2235]: I1213 14:30:38.031707 2235 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cni-path\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.032396 kubelet[2235]: I1213 14:30:38.031721 2235 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-hostproc\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.032396 kubelet[2235]: I1213 14:30:38.031734 2235 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-xtables-lock\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.032396 kubelet[2235]: I1213 14:30:38.031750 2235 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-bpf-maps\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.032396 kubelet[2235]: I1213 14:30:38.031765 2235 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-cilium-ipsec-secrets\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.032396 kubelet[2235]: I1213 14:30:38.031785 2235 
reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc59b550-ae6f-48d3-aaf9-74bd414a83a5-hubble-tls\") on node \"172.31.20.184\" DevicePath \"\"" Dec 13 14:30:38.100903 env[1840]: time="2024-12-13T14:30:38.100805161Z" level=error msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" failed" error="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T143038Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=36b30809deec9bfada1cba91e921e3752ead30c4777ccd75111c014a264ab712&cf_sign=MZx8PBHAl8hRFaw%2BE66hgVFj0b2guOa74Yw6VtOSD8F6gwlLkxoHqag4%2FmMrHI3cLIx63FbGJFs7iZiFB4sOJzGzrQrtKPHfH0pK8w0krN89cQFNOoVqUJEWnCWSkfLDnBPflfjGJuykCyQIfqcFvi30deFd97NaAff6r6fM0Sg9jJX2ZpXGBGT0oSoAvtQH%2FWuP5eQLWwEOSdAL2dcCSniGXbRGRu9P2BCdUFbitZhmTtnN5kT8iPHFMuMBXfSmaF6RU3AA7BIt6CbDMKGtk6CnhoIdFBhdmKDP5gKf67PqVgMbwcVBSMvodA6GsrnHQ3s4PB3sd%2FK2gh5F9f%2FQ8g%3D%3D&cf_expiry=1734100838&region=us-east-1&namespace=cilium&repo_name=operator-generic\": dial tcp: lookup cdn03.quay.io: no such host" Dec 13 14:30:38.101662 kubelet[2235]: E1213 14:30:38.101539 2235 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T143038Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=36b30809deec9bfada1cba91e921e3752ead30c4777ccd75111c014a264ab712&cf_sign=MZx8PBHAl8hRFaw%2BE66hgVFj0b2guOa74Yw6VtOSD8F6gwlLkxoHqag4%2FmMrHI3cLIx63FbGJFs7iZiFB4sOJzGzrQrtKPHfH0pK8w0krN89cQFNOoVqUJEWnCWSkfLDnBPflfjGJuykCyQIfqcFvi30deFd97NaAff6r6fM0Sg9jJX2ZpXGBGT0oSoAvtQH%2FWuP5eQLWwEOSdAL2dcCSniGXbRGRu9P2BCdUFbitZhmTtnN5kT8iPHFMuMBXfSmaF6RU3AA7BIt6CbDMKGtk6CnhoIdFBhdmKDP5gKf67PqVgMbwcVBSMvodA6GsrnHQ3s4PB3sd%2FK2gh5F9f%2FQ8g%3D%3D&cf_expiry=1734100838&region=us-east-1&namespace=cilium&repo_name=operator-generic\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Dec 13 14:30:38.101964 kubelet[2235]: E1213 14:30:38.101717 2235 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T143038Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=36b30809deec9bfada1cba91e921e3752ead30c4777ccd75111c014a264ab712&cf_sign=MZx8PBHAl8hRFaw%2BE66hgVFj0b2guOa74Yw6VtOSD8F6gwlLkxoHqag4%2FmMrHI3cLIx63FbGJFs7iZiFB4sOJzGzrQrtKPHfH0pK8w0krN89cQFNOoVqUJEWnCWSkfLDnBPflfjGJuykCyQIfqcFvi30deFd97NaAff6r6fM0Sg9jJX2ZpXGBGT0oSoAvtQH%2FWuP5eQLWwEOSdAL2dcCSniGXbRGRu9P2BCdUFbitZhmTtnN5kT8iPHFMuMBXfSmaF6RU3AA7BIt6CbDMKGtk6CnhoIdFBhdmKDP5gKf67PqVgMbwcVBSMvodA6GsrnHQ3s4PB3sd%2FK2gh5F9f%2FQ8g%3D%3D&cf_expiry=1734100838&region=us-east-1&namespace=cilium&repo_name=operator-generic\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Dec 13 14:30:38.102161 kubelet[2235]: E1213 14:30:38.102115 2235 kuberuntime_manager.go:1262] container &Container{Name:cilium-operator,Image:quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Command:[cilium-operator-generic],Args:[--config-dir=/tmp/cilium/config-map 
--debug=$(CILIUM_DEBUG)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:K8S_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_K8S_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_DEBUG,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:cilium-config,},Key:debug,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cilium-config-path,ReadOnly:true,MountPath:/tmp/cilium/config-map,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4phsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 9234 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-operator-5cc964979-t7cqn_kube-system(213f4bf4-907d-4b3a-a7fe-4ba2d411a407): ErrImagePull: failed to pull and unpack image 
"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e": failed to copy: httpReadSeeker: failed open: failed to do request: Get "https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T143038Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=36b30809deec9bfada1cba91e921e3752ead30c4777ccd75111c014a264ab712&cf_sign=MZx8PBHAl8hRFaw%2BE66hgVFj0b2guOa74Yw6VtOSD8F6gwlLkxoHqag4%2FmMrHI3cLIx63FbGJFs7iZiFB4sOJzGzrQrtKPHfH0pK8w0krN89cQFNOoVqUJEWnCWSkfLDnBPflfjGJuykCyQIfqcFvi30deFd97NaAff6r6fM0Sg9jJX2ZpXGBGT0oSoAvtQH%2FWuP5eQLWwEOSdAL2dcCSniGXbRGRu9P2BCdUFbitZhmTtnN5kT8iPHFMuMBXfSmaF6RU3AA7BIt6CbDMKGtk6CnhoIdFBhdmKDP5gKf67PqVgMbwcVBSMvodA6GsrnHQ3s4PB3sd%2FK2gh5F9f%2FQ8g%3D%3D&cf_expiry=1734100838&region=us-east-1&namespace=cilium&repo_name=operator-generic": dial tcp: lookup cdn03.quay.io: no such host Dec 13 14:30:38.102380 kubelet[2235]: E1213 14:30:38.102214 2235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\\\"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T143038Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=36b30809deec9bfada1cba91e921e3752ead30c4777ccd75111c014a264ab712&cf_sign=MZx8PBHAl8hRFaw%2BE66hgVFj0b2guOa74Yw6VtOSD8F6gwlLkxoHqag4%2FmMrHI3cLIx63FbGJFs7iZiFB4sOJzGzrQrtKPHfH0pK8w0krN89cQFNOoVqUJEWnCWSkfLDnBPflfjGJuykCyQIfqcFvi30deFd97NaAff6r6fM0Sg9jJX2ZpXGBGT0oSoAvtQH%2FWuP5eQLWwEOSdAL2dcCSniGXbRGRu9P2BCdUFbitZhmTtnN5kT8iPHFMuMBXfSmaF6RU3AA7BIt6CbDMKGtk6CnhoIdFBhdmKDP5gKf67PqVgMbwcVBSMvodA6GsrnHQ3s4PB3sd%2FK2gh5F9f%2FQ8g%3D%3D&cf_expiry=1734100838&region=us-east-1&namespace=cilium&repo_name=operator-generic\\\": dial tcp: lookup cdn03.quay.io: no such host\"" pod="kube-system/cilium-operator-5cc964979-t7cqn" podUID="213f4bf4-907d-4b3a-a7fe-4ba2d411a407" Dec 13 14:30:38.154797 kubelet[2235]: E1213 14:30:38.154753 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:38.241037 systemd[1]: var-lib-kubelet-pods-dc59b550\x2dae6f\x2d48d3\x2daaf9\x2d74bd414a83a5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6f2fs.mount: Deactivated successfully. Dec 13 14:30:38.241218 systemd[1]: var-lib-kubelet-pods-dc59b550\x2dae6f\x2d48d3\x2daaf9\x2d74bd414a83a5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 14:30:38.750444 kubelet[2235]: I1213 14:30:38.750417 2235 scope.go:117] "RemoveContainer" containerID="2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0" Dec 13 14:30:38.752704 kubelet[2235]: E1213 14:30:38.752449 2235 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"\"" pod="kube-system/cilium-operator-5cc964979-t7cqn" podUID="213f4bf4-907d-4b3a-a7fe-4ba2d411a407" Dec 13 14:30:38.755636 env[1840]: time="2024-12-13T14:30:38.755274325Z" level=info msg="RemoveContainer for \"2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0\"" Dec 13 14:30:38.760124 env[1840]: time="2024-12-13T14:30:38.760078352Z" level=info msg="RemoveContainer for \"2c3de55f91a2472a824a987ef69dfb3ef7feaaee1322c613b6b0904e79a7ffe0\" returns successfully" Dec 13 14:30:38.896021 kubelet[2235]: I1213 14:30:38.895982 2235 topology_manager.go:215] "Topology Admit Handler" podUID="5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34" podNamespace="kube-system" podName="cilium-852qs" Dec 13 14:30:38.896310 kubelet[2235]: E1213 14:30:38.896076 2235 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc59b550-ae6f-48d3-aaf9-74bd414a83a5" containerName="mount-cgroup" Dec 13 14:30:38.896310 kubelet[2235]: I1213 14:30:38.896108 2235 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc59b550-ae6f-48d3-aaf9-74bd414a83a5" containerName="mount-cgroup" Dec 13 14:30:38.939341 kubelet[2235]: I1213 14:30:38.939287 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-hubble-tls\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939341 
kubelet[2235]: I1213 14:30:38.939348 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-cilium-ipsec-secrets\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939578 kubelet[2235]: I1213 14:30:38.939379 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-cni-path\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939578 kubelet[2235]: I1213 14:30:38.939402 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-lib-modules\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939578 kubelet[2235]: I1213 14:30:38.939428 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-host-proc-sys-kernel\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939578 kubelet[2235]: I1213 14:30:38.939456 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-cilium-run\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939578 kubelet[2235]: I1213 14:30:38.939482 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-bpf-maps\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939578 kubelet[2235]: I1213 14:30:38.939508 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-hostproc\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939578 kubelet[2235]: I1213 14:30:38.939533 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-cilium-cgroup\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939578 kubelet[2235]: I1213 14:30:38.939564 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-etc-cni-netd\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939997 kubelet[2235]: I1213 14:30:38.939605 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-cilium-config-path\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939997 kubelet[2235]: I1213 14:30:38.939638 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhzr\" (UniqueName: \"kubernetes.io/projected/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-kube-api-access-mxhzr\") pod 
\"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939997 kubelet[2235]: I1213 14:30:38.939671 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-host-proc-sys-net\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939997 kubelet[2235]: I1213 14:30:38.939713 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-xtables-lock\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:38.939997 kubelet[2235]: I1213 14:30:38.939743 2235 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34-clustermesh-secrets\") pod \"cilium-852qs\" (UID: \"5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34\") " pod="kube-system/cilium-852qs" Dec 13 14:30:39.155521 kubelet[2235]: E1213 14:30:39.155477 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:39.200433 env[1840]: time="2024-12-13T14:30:39.200381207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-852qs,Uid:5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34,Namespace:kube-system,Attempt:0,}" Dec 13 14:30:39.237788 env[1840]: time="2024-12-13T14:30:39.237697519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:30:39.237788 env[1840]: time="2024-12-13T14:30:39.237751884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:30:39.237788 env[1840]: time="2024-12-13T14:30:39.237768973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:30:39.238390 env[1840]: time="2024-12-13T14:30:39.238341853Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c pid=4039 runtime=io.containerd.runc.v2 Dec 13 14:30:39.314578 env[1840]: time="2024-12-13T14:30:39.314525783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-852qs,Uid:5d1c9ea9-5d8c-4063-a41d-fed13c3b2c34,Namespace:kube-system,Attempt:0,} returns sandbox id \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\"" Dec 13 14:30:39.318242 env[1840]: time="2024-12-13T14:30:39.318207439Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:30:39.341558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296416454.mount: Deactivated successfully. 
Dec 13 14:30:39.360154 env[1840]: time="2024-12-13T14:30:39.360099118Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b48d50f51f94f962843e695c24ccb18562d9ab6aa15151b3c211e54c17f7a017\"" Dec 13 14:30:39.360933 env[1840]: time="2024-12-13T14:30:39.360897511Z" level=info msg="StartContainer for \"b48d50f51f94f962843e695c24ccb18562d9ab6aa15151b3c211e54c17f7a017\"" Dec 13 14:30:39.425436 kubelet[2235]: I1213 14:30:39.422703 2235 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dc59b550-ae6f-48d3-aaf9-74bd414a83a5" path="/var/lib/kubelet/pods/dc59b550-ae6f-48d3-aaf9-74bd414a83a5/volumes" Dec 13 14:30:39.431508 env[1840]: time="2024-12-13T14:30:39.431382367Z" level=info msg="StartContainer for \"b48d50f51f94f962843e695c24ccb18562d9ab6aa15151b3c211e54c17f7a017\" returns successfully" Dec 13 14:30:39.491706 env[1840]: time="2024-12-13T14:30:39.491654923Z" level=info msg="shim disconnected" id=b48d50f51f94f962843e695c24ccb18562d9ab6aa15151b3c211e54c17f7a017 Dec 13 14:30:39.491706 env[1840]: time="2024-12-13T14:30:39.491705776Z" level=warning msg="cleaning up after shim disconnected" id=b48d50f51f94f962843e695c24ccb18562d9ab6aa15151b3c211e54c17f7a017 namespace=k8s.io Dec 13 14:30:39.491706 env[1840]: time="2024-12-13T14:30:39.491720134Z" level=info msg="cleaning up dead shim" Dec 13 14:30:39.501741 env[1840]: time="2024-12-13T14:30:39.501690576Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4124 runtime=io.containerd.runc.v2\n" Dec 13 14:30:39.767712 env[1840]: time="2024-12-13T14:30:39.767537202Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:30:39.789753 env[1840]: 
time="2024-12-13T14:30:39.789702145Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba46690627a17d67d4170e68ccb117b688ba7aa23a64a3ea7bfc21370b4fd3cc\"" Dec 13 14:30:39.791065 env[1840]: time="2024-12-13T14:30:39.791015101Z" level=info msg="StartContainer for \"ba46690627a17d67d4170e68ccb117b688ba7aa23a64a3ea7bfc21370b4fd3cc\"" Dec 13 14:30:39.899580 env[1840]: time="2024-12-13T14:30:39.899530213Z" level=info msg="StartContainer for \"ba46690627a17d67d4170e68ccb117b688ba7aa23a64a3ea7bfc21370b4fd3cc\" returns successfully" Dec 13 14:30:39.955610 env[1840]: time="2024-12-13T14:30:39.955548787Z" level=info msg="shim disconnected" id=ba46690627a17d67d4170e68ccb117b688ba7aa23a64a3ea7bfc21370b4fd3cc Dec 13 14:30:39.955853 env[1840]: time="2024-12-13T14:30:39.955760783Z" level=warning msg="cleaning up after shim disconnected" id=ba46690627a17d67d4170e68ccb117b688ba7aa23a64a3ea7bfc21370b4fd3cc namespace=k8s.io Dec 13 14:30:39.955853 env[1840]: time="2024-12-13T14:30:39.955781642Z" level=info msg="cleaning up dead shim" Dec 13 14:30:39.965446 env[1840]: time="2024-12-13T14:30:39.965348489Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4190 runtime=io.containerd.runc.v2\n" Dec 13 14:30:40.156276 kubelet[2235]: E1213 14:30:40.156223 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:40.242422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491609299.mount: Deactivated successfully. 
Dec 13 14:30:40.252413 kubelet[2235]: E1213 14:30:40.252377 2235 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:30:40.761339 env[1840]: time="2024-12-13T14:30:40.761280682Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:30:40.781803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752294268.mount: Deactivated successfully. Dec 13 14:30:40.850008 env[1840]: time="2024-12-13T14:30:40.849917718Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d9ff32d2fbfe0f788c79d4e68969481cddde2b08ec84ad2094748d94c42779a8\"" Dec 13 14:30:40.851482 env[1840]: time="2024-12-13T14:30:40.851445741Z" level=info msg="StartContainer for \"d9ff32d2fbfe0f788c79d4e68969481cddde2b08ec84ad2094748d94c42779a8\"" Dec 13 14:30:40.965437 env[1840]: time="2024-12-13T14:30:40.963977148Z" level=info msg="StartContainer for \"d9ff32d2fbfe0f788c79d4e68969481cddde2b08ec84ad2094748d94c42779a8\" returns successfully" Dec 13 14:30:41.008202 env[1840]: time="2024-12-13T14:30:41.008142284Z" level=info msg="shim disconnected" id=d9ff32d2fbfe0f788c79d4e68969481cddde2b08ec84ad2094748d94c42779a8 Dec 13 14:30:41.008202 env[1840]: time="2024-12-13T14:30:41.008206553Z" level=warning msg="cleaning up after shim disconnected" id=d9ff32d2fbfe0f788c79d4e68969481cddde2b08ec84ad2094748d94c42779a8 namespace=k8s.io Dec 13 14:30:41.008728 env[1840]: time="2024-12-13T14:30:41.008219668Z" level=info msg="cleaning up dead shim" Dec 13 14:30:41.022453 env[1840]: time="2024-12-13T14:30:41.021562902Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:41Z\" level=info msg=\"starting 
signal loop\" namespace=k8s.io pid=4248 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:30:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Dec 13 14:30:41.157231 kubelet[2235]: E1213 14:30:41.157180 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:41.244455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9ff32d2fbfe0f788c79d4e68969481cddde2b08ec84ad2094748d94c42779a8-rootfs.mount: Deactivated successfully. Dec 13 14:30:41.766125 env[1840]: time="2024-12-13T14:30:41.766085316Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:30:41.794159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount745388466.mount: Deactivated successfully. Dec 13 14:30:41.813819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968564379.mount: Deactivated successfully. 
Dec 13 14:30:41.824755 env[1840]: time="2024-12-13T14:30:41.824700859Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"175329c730a64f0d4c8afa5284139549ff26abc6a7cfbe0c579f291267564577\"" Dec 13 14:30:41.828445 env[1840]: time="2024-12-13T14:30:41.828409875Z" level=info msg="StartContainer for \"175329c730a64f0d4c8afa5284139549ff26abc6a7cfbe0c579f291267564577\"" Dec 13 14:30:41.908758 env[1840]: time="2024-12-13T14:30:41.908704573Z" level=info msg="StartContainer for \"175329c730a64f0d4c8afa5284139549ff26abc6a7cfbe0c579f291267564577\" returns successfully" Dec 13 14:30:41.937312 env[1840]: time="2024-12-13T14:30:41.937263966Z" level=info msg="shim disconnected" id=175329c730a64f0d4c8afa5284139549ff26abc6a7cfbe0c579f291267564577 Dec 13 14:30:41.937716 env[1840]: time="2024-12-13T14:30:41.937585266Z" level=warning msg="cleaning up after shim disconnected" id=175329c730a64f0d4c8afa5284139549ff26abc6a7cfbe0c579f291267564577 namespace=k8s.io Dec 13 14:30:41.937716 env[1840]: time="2024-12-13T14:30:41.937714447Z" level=info msg="cleaning up dead shim" Dec 13 14:30:41.949326 env[1840]: time="2024-12-13T14:30:41.949270920Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4305 runtime=io.containerd.runc.v2\n" Dec 13 14:30:42.158097 kubelet[2235]: E1213 14:30:42.158044 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:42.770581 env[1840]: time="2024-12-13T14:30:42.770537178Z" level=info msg="CreateContainer within sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:30:42.793389 env[1840]: time="2024-12-13T14:30:42.793338646Z" level=info msg="CreateContainer within 
sandbox \"6636b20d5ba058878eb7e0aad69728658335ab619813412e1215808dcfb6274c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e7e978d64afde2d6f1d1420b678927e019b596c978183c87b91d7f20ffadd8c\"" Dec 13 14:30:42.794212 env[1840]: time="2024-12-13T14:30:42.794178661Z" level=info msg="StartContainer for \"3e7e978d64afde2d6f1d1420b678927e019b596c978183c87b91d7f20ffadd8c\"" Dec 13 14:30:42.875673 env[1840]: time="2024-12-13T14:30:42.875611747Z" level=info msg="StartContainer for \"3e7e978d64afde2d6f1d1420b678927e019b596c978183c87b91d7f20ffadd8c\" returns successfully" Dec 13 14:30:43.159144 kubelet[2235]: E1213 14:30:43.159099 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:43.246192 systemd[1]: run-containerd-runc-k8s.io-3e7e978d64afde2d6f1d1420b678927e019b596c978183c87b91d7f20ffadd8c-runc.JUpijA.mount: Deactivated successfully. Dec 13 14:30:43.426621 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:30:44.159753 kubelet[2235]: E1213 14:30:44.159714 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:45.160913 kubelet[2235]: E1213 14:30:45.160872 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:45.421636 systemd[1]: run-containerd-runc-k8s.io-3e7e978d64afde2d6f1d1420b678927e019b596c978183c87b91d7f20ffadd8c-runc.ZDDaxg.mount: Deactivated successfully. Dec 13 14:30:46.161985 kubelet[2235]: E1213 14:30:46.161940 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:46.485410 systemd-networkd[1510]: lxc_health: Link UP Dec 13 14:30:46.490399 (udev-worker)[4876]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:30:46.493088 systemd-networkd[1510]: lxc_health: Gained carrier Dec 13 14:30:46.493964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:30:47.163003 kubelet[2235]: E1213 14:30:47.162957 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:47.235021 kubelet[2235]: I1213 14:30:47.234969 2235 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-852qs" podStartSLOduration=9.234917852 podStartE2EDuration="9.234917852s" podCreationTimestamp="2024-12-13 14:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:43.806207688 +0000 UTC m=+89.377744590" watchObservedRunningTime="2024-12-13 14:30:47.234917852 +0000 UTC m=+92.806454768" Dec 13 14:30:47.805847 systemd[1]: run-containerd-runc-k8s.io-3e7e978d64afde2d6f1d1420b678927e019b596c978183c87b91d7f20ffadd8c-runc.nMoD4G.mount: Deactivated successfully. Dec 13 14:30:48.112733 systemd-networkd[1510]: lxc_health: Gained IPv6LL Dec 13 14:30:48.171778 kubelet[2235]: E1213 14:30:48.171691 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:49.172529 kubelet[2235]: E1213 14:30:49.172481 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:50.085680 systemd[1]: run-containerd-runc-k8s.io-3e7e978d64afde2d6f1d1420b678927e019b596c978183c87b91d7f20ffadd8c-runc.wvr8hN.mount: Deactivated successfully. 
Dec 13 14:30:50.173654 kubelet[2235]: E1213 14:30:50.173574 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:50.422331 env[1840]: time="2024-12-13T14:30:50.420811622Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:30:51.174467 kubelet[2235]: E1213 14:30:51.174402 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:52.175038 kubelet[2235]: E1213 14:30:52.174998 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:53.176801 kubelet[2235]: E1213 14:30:53.176754 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:54.177836 kubelet[2235]: E1213 14:30:54.177791 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:55.078204 kubelet[2235]: E1213 14:30:55.078159 2235 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:55.178839 kubelet[2235]: E1213 14:30:55.178791 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:56.179503 kubelet[2235]: E1213 14:30:56.179447 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:57.180447 kubelet[2235]: E1213 14:30:57.180398 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:58.181035 kubelet[2235]: E1213 14:30:58.180851 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:30:58.664560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297277587.mount: Deactivated successfully. Dec 13 14:30:59.181607 kubelet[2235]: E1213 14:30:59.181521 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:30:59.712238 env[1840]: time="2024-12-13T14:30:59.712182663Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.715725 env[1840]: time="2024-12-13T14:30:59.715679054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.719219 env[1840]: time="2024-12-13T14:30:59.719098915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:30:59.720041 env[1840]: time="2024-12-13T14:30:59.719998922Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:30:59.722848 env[1840]: time="2024-12-13T14:30:59.722814130Z" level=info msg="CreateContainer within sandbox \"ca48c5aa9ecc3bd15efaa57323464a954ac6a138d427167c318026b22e85bd4d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:30:59.758023 env[1840]: time="2024-12-13T14:30:59.757627799Z" level=info msg="CreateContainer within sandbox \"ca48c5aa9ecc3bd15efaa57323464a954ac6a138d427167c318026b22e85bd4d\" 
for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"95f5c56774bbd660e58f33d6605e6f8df47634e24373a06942d1bbe6cd2dde22\"" Dec 13 14:30:59.760644 env[1840]: time="2024-12-13T14:30:59.760602227Z" level=info msg="StartContainer for \"95f5c56774bbd660e58f33d6605e6f8df47634e24373a06942d1bbe6cd2dde22\"" Dec 13 14:30:59.895612 env[1840]: time="2024-12-13T14:30:59.890902075Z" level=info msg="StartContainer for \"95f5c56774bbd660e58f33d6605e6f8df47634e24373a06942d1bbe6cd2dde22\" returns successfully" Dec 13 14:31:00.181830 kubelet[2235]: E1213 14:31:00.181732 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:00.878647 kubelet[2235]: I1213 14:31:00.878609 2235 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-t7cqn" podStartSLOduration=2.715697091 podStartE2EDuration="25.878529248s" podCreationTimestamp="2024-12-13 14:30:35 +0000 UTC" firstStartedPulling="2024-12-13 14:30:36.55750729 +0000 UTC m=+82.129044171" lastFinishedPulling="2024-12-13 14:30:59.720339435 +0000 UTC m=+105.291876328" observedRunningTime="2024-12-13 14:31:00.877966398 +0000 UTC m=+106.449503300" watchObservedRunningTime="2024-12-13 14:31:00.878529248 +0000 UTC m=+106.450066150" Dec 13 14:31:01.182577 kubelet[2235]: E1213 14:31:01.182445 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:01.462485 systemd[1]: run-containerd-runc-k8s.io-3e7e978d64afde2d6f1d1420b678927e019b596c978183c87b91d7f20ffadd8c-runc.NsaXCX.mount: Deactivated successfully. 
Dec 13 14:31:02.183144 kubelet[2235]: E1213 14:31:02.183025 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:03.183801 kubelet[2235]: E1213 14:31:03.183751 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:31:04.184490 kubelet[2235]: E1213 14:31:04.184434 2235 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"