Dec 13 14:24:08.187395 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:24:08.187432 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:24:08.187448 kernel: BIOS-provided physical RAM map:
Dec 13 14:24:08.187477 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:24:08.187489 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:24:08.187500 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:24:08.187518 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 14:24:08.187530 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 14:24:08.187542 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 14:24:08.187554 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:24:08.187566 kernel: NX (Execute Disable) protection: active
Dec 13 14:24:08.187578 kernel: SMBIOS 2.7 present.
Dec 13 14:24:08.187590 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 14:24:08.187603 kernel: Hypervisor detected: KVM
Dec 13 14:24:08.187621 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:24:08.187635 kernel: kvm-clock: cpu 0, msr 6919a001, primary cpu clock
Dec 13 14:24:08.187648 kernel: kvm-clock: using sched offset of 7737223907 cycles
Dec 13 14:24:08.187662 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:24:08.187676 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 14:24:08.187689 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:24:08.187706 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:24:08.187720 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 14:24:08.187734 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:24:08.187747 kernel: Using GB pages for direct mapping
Dec 13 14:24:08.187760 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:24:08.187773 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 14:24:08.187787 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 14:24:08.187802 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:24:08.187816 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 14:24:08.187835 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 14:24:08.187850 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:24:08.187864 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:24:08.187879 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 14:24:08.187893 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:24:08.187907 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 14:24:08.187922 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 14:24:08.187935 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:24:08.187953 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 14:24:08.187966 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 14:24:08.187980 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 14:24:08.188000 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 14:24:08.188014 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 14:24:08.188029 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 14:24:08.188044 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 14:24:08.188130 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 14:24:08.188144 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 14:24:08.188158 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 14:24:08.188172 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:24:08.188186 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:24:08.188200 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 14:24:08.188215 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 14:24:08.188229 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 14:24:08.188247 kernel: Zone ranges:
Dec 13 14:24:08.188262 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:24:08.188276 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 14:24:08.188290 kernel: Normal empty
Dec 13 14:24:08.188305 kernel: Movable zone start for each node
Dec 13 14:24:08.188319 kernel: Early memory node ranges
Dec 13 14:24:08.188333 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:24:08.188348 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 14:24:08.188362 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 14:24:08.188380 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:24:08.188394 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:24:08.188409 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 14:24:08.188424 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:24:08.188439 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:24:08.188472 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 14:24:08.188484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:24:08.188495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:24:08.188510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:24:08.188527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:24:08.188542 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:24:08.188559 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:24:08.188574 kernel: TSC deadline timer available
Dec 13 14:24:08.188589 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:24:08.188604 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 14:24:08.188618 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:24:08.188633 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:24:08.188648 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:24:08.188665 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:24:08.188680 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:24:08.188694 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:24:08.188709 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 14:24:08.188723 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:24:08.188738 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:24:08.188752 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 14:24:08.188766 kernel: Policy zone: DMA32
Dec 13 14:24:08.188783 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:24:08.188802 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:24:08.188818 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:24:08.188833 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:24:08.188849 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:24:08.188865 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved)
Dec 13 14:24:08.188881 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:24:08.188895 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:24:08.188910 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:24:08.188929 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:24:08.188944 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:24:08.188960 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:24:08.188976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:24:08.188992 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:24:08.189007 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:24:08.189022 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:24:08.189037 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:24:08.189051 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:24:08.189069 kernel: random: crng init done
Dec 13 14:24:08.189083 kernel: Console: colour VGA+ 80x25
Dec 13 14:24:08.189097 kernel: printk: console [ttyS0] enabled
Dec 13 14:24:08.189112 kernel: ACPI: Core revision 20210730
Dec 13 14:24:08.189126 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 14:24:08.189141 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:24:08.189156 kernel: x2apic enabled
Dec 13 14:24:08.189170 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:24:08.189185 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 14:24:08.189203 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 14:24:08.189218 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:24:08.189233 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:24:08.189249 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:24:08.189273 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:24:08.189291 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:24:08.189306 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:24:08.189321 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:24:08.189336 kernel: RETBleed: Vulnerable
Dec 13 14:24:08.189351 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:24:08.189367 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:24:08.189380 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:24:08.189510 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:24:08.189525 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:24:08.189544 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:24:08.189559 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:24:08.189574 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:24:08.189587 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:24:08.189774 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:24:08.189795 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:24:08.189810 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:24:08.189826 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 14:24:08.189842 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:24:08.189856 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:24:08.189872 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:24:08.189886 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 14:24:08.189901 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 14:24:08.189917 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 14:24:08.189932 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 14:24:08.189947 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 14:24:08.189962 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:24:08.189979 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:24:08.189994 kernel: LSM: Security Framework initializing
Dec 13 14:24:08.190009 kernel: SELinux: Initializing.
Dec 13 14:24:08.190025 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:24:08.190040 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:24:08.190055 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:24:08.190070 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:24:08.190086 kernel: signal: max sigframe size: 3632
Dec 13 14:24:08.190101 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:24:08.190116 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:24:08.190134 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:24:08.190149 kernel: x86: Booting SMP configuration:
Dec 13 14:24:08.190165 kernel: .... node #0, CPUs: #1
Dec 13 14:24:08.190180 kernel: kvm-clock: cpu 1, msr 6919a041, secondary cpu clock
Dec 13 14:24:08.190195 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 14:24:08.190211 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:24:08.190227 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:24:08.190243 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:24:08.190258 kernel: smpboot: Max logical packages: 1
Dec 13 14:24:08.190276 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 14:24:08.190291 kernel: devtmpfs: initialized
Dec 13 14:24:08.190307 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:24:08.190322 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:24:08.190335 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:24:08.190348 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:24:08.190363 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:24:08.190377 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:24:08.190393 kernel: audit: type=2000 audit(1734099846.788:1): state=initialized audit_enabled=0 res=1
Dec 13 14:24:08.190410 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:24:08.190425 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:24:08.190440 kernel: cpuidle: using governor menu
Dec 13 14:24:08.190473 kernel: ACPI: bus type PCI registered
Dec 13 14:24:08.190486 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:24:08.190581 kernel: dca service started, version 1.12.1
Dec 13 14:24:08.190599 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:24:08.190615 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:24:08.190629 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:24:08.190647 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:24:08.190659 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:24:08.190671 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:24:08.190684 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:24:08.190697 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:24:08.190711 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:24:08.190724 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:24:08.190738 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:24:08.190753 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:24:08.190874 kernel: ACPI: Interpreter enabled
Dec 13 14:24:08.190917 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:24:08.190932 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:24:08.190947 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:24:08.191007 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:24:08.191027 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:24:08.191313 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:24:08.191497 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:24:08.191523 kernel: acpiphp: Slot [3] registered
Dec 13 14:24:08.191539 kernel: acpiphp: Slot [4] registered
Dec 13 14:24:08.191554 kernel: acpiphp: Slot [5] registered
Dec 13 14:24:08.191570 kernel: acpiphp: Slot [6] registered
Dec 13 14:24:08.191585 kernel: acpiphp: Slot [7] registered
Dec 13 14:24:08.191600 kernel: acpiphp: Slot [8] registered
Dec 13 14:24:08.191616 kernel: acpiphp: Slot [9] registered
Dec 13 14:24:08.191631 kernel: acpiphp: Slot [10] registered
Dec 13 14:24:08.191646 kernel: acpiphp: Slot [11] registered
Dec 13 14:24:08.191664 kernel: acpiphp: Slot [12] registered
Dec 13 14:24:08.191679 kernel: acpiphp: Slot [13] registered
Dec 13 14:24:08.191694 kernel: acpiphp: Slot [14] registered
Dec 13 14:24:08.191709 kernel: acpiphp: Slot [15] registered
Dec 13 14:24:08.191724 kernel: acpiphp: Slot [16] registered
Dec 13 14:24:08.191739 kernel: acpiphp: Slot [17] registered
Dec 13 14:24:08.191754 kernel: acpiphp: Slot [18] registered
Dec 13 14:24:08.191770 kernel: acpiphp: Slot [19] registered
Dec 13 14:24:08.191785 kernel: acpiphp: Slot [20] registered
Dec 13 14:24:08.191803 kernel: acpiphp: Slot [21] registered
Dec 13 14:24:08.191819 kernel: acpiphp: Slot [22] registered
Dec 13 14:24:08.191834 kernel: acpiphp: Slot [23] registered
Dec 13 14:24:08.191849 kernel: acpiphp: Slot [24] registered
Dec 13 14:24:08.191864 kernel: acpiphp: Slot [25] registered
Dec 13 14:24:08.191879 kernel: acpiphp: Slot [26] registered
Dec 13 14:24:08.191894 kernel: acpiphp: Slot [27] registered
Dec 13 14:24:08.191909 kernel: acpiphp: Slot [28] registered
Dec 13 14:24:08.191925 kernel: acpiphp: Slot [29] registered
Dec 13 14:24:08.191939 kernel: acpiphp: Slot [30] registered
Dec 13 14:24:08.191955 kernel: acpiphp: Slot [31] registered
Dec 13 14:24:08.191970 kernel: PCI host bridge to bus 0000:00
Dec 13 14:24:08.192175 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:24:08.192431 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:24:08.192579 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:24:08.192706 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:24:08.192964 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:24:08.193135 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:24:08.193286 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:24:08.193573 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 14:24:08.193722 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:24:08.193861 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 14:24:08.194075 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 14:24:08.194217 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 14:24:08.194360 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 14:24:08.194511 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 14:24:08.194653 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 14:24:08.194790 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 14:24:08.195637 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 14:24:08.196144 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 14:24:08.196572 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 14:24:08.196727 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:24:08.197889 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:24:08.198644 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 14:24:08.198859 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:24:08.199052 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 14:24:08.199075 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:24:08.199098 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:24:08.199115 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:24:08.199131 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:24:08.199147 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:24:08.199163 kernel: iommu: Default domain type: Translated
Dec 13 14:24:08.199178 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:24:08.209709 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 14:24:08.209882 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:24:08.210024 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 14:24:08.210051 kernel: vgaarb: loaded
Dec 13 14:24:08.210068 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:24:08.210084 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 14:24:08.210100 kernel: PTP clock support registered
Dec 13 14:24:08.210115 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:24:08.210131 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:24:08.210147 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:24:08.210162 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 14:24:08.210180 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 14:24:08.210195 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 14:24:08.210211 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:24:08.210227 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:24:08.210243 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:24:08.210259 kernel: pnp: PnP ACPI init
Dec 13 14:24:08.210274 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:24:08.210291 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:24:08.210306 kernel: NET: Registered PF_INET protocol family
Dec 13 14:24:08.210326 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:24:08.210341 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:24:08.210357 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:24:08.210373 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:24:08.210388 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:24:08.210403 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:24:08.210499 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:24:08.210540 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:24:08.210556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:24:08.210824 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:24:08.210991 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:24:08.211124 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:24:08.211251 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:24:08.230641 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:24:08.230897 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:24:08.231108 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:24:08.231136 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:24:08.231151 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:24:08.231164 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 14:24:08.231177 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:24:08.231190 kernel: Initialise system trusted keyrings
Dec 13 14:24:08.231203 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:24:08.231216 kernel: Key type asymmetric registered
Dec 13 14:24:08.231228 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:24:08.231241 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:24:08.231256 kernel: io scheduler mq-deadline registered
Dec 13 14:24:08.231268 kernel: io scheduler kyber registered
Dec 13 14:24:08.231280 kernel: io scheduler bfq registered
Dec 13 14:24:08.231294 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:24:08.231306 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:24:08.231319 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:24:08.231332 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:24:08.231345 kernel: i8042: Warning: Keylock active
Dec 13 14:24:08.231358 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:24:08.231373 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:24:08.231524 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:24:08.231643 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:24:08.231759 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:24:07 UTC (1734099847)
Dec 13 14:24:08.231874 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:24:08.231889 kernel: intel_pstate: CPU model not supported
Dec 13 14:24:08.231902 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:24:08.231914 kernel: Segment Routing with IPv6
Dec 13 14:24:08.231931 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:24:08.231944 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:24:08.231956 kernel: Key type dns_resolver registered
Dec 13 14:24:08.231968 kernel: IPI shorthand broadcast: enabled
Dec 13 14:24:08.231980 kernel: sched_clock: Marking stable (381352853, 266136122)->(750660526, -103171551)
Dec 13 14:24:08.231993 kernel: registered taskstats version 1
Dec 13 14:24:08.232006 kernel: Loading compiled-in X.509 certificates
Dec 13 14:24:08.232019 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:24:08.232031 kernel: Key type .fscrypt registered
Dec 13 14:24:08.232047 kernel: Key type fscrypt-provisioning registered
Dec 13 14:24:08.232060 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:24:08.232072 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:24:08.232085 kernel: ima: No architecture policies found
Dec 13 14:24:08.232097 kernel: clk: Disabling unused clocks
Dec 13 14:24:08.232110 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:24:08.232123 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:24:08.232137 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:24:08.232150 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:24:08.232165 kernel: Run /init as init process
Dec 13 14:24:08.232178 kernel: with arguments:
Dec 13 14:24:08.232191 kernel: /init
Dec 13 14:24:08.232203 kernel: with environment:
Dec 13 14:24:08.232215 kernel: HOME=/
Dec 13 14:24:08.232228 kernel: TERM=linux
Dec 13 14:24:08.232240 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:24:08.232256 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:24:08.232276 systemd[1]: Detected virtualization amazon.
Dec 13 14:24:08.232289 systemd[1]: Detected architecture x86-64.
Dec 13 14:24:08.232302 systemd[1]: Running in initrd.
Dec 13 14:24:08.232316 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:24:08.232343 systemd[1]: Hostname set to <localhost>.
Dec 13 14:24:08.232362 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:24:08.232375 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:24:08.232389 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:24:08.232403 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:24:08.232417 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:24:08.232431 systemd[1]: Reached target paths.target.
Dec 13 14:24:08.232444 systemd[1]: Reached target slices.target.
Dec 13 14:24:08.234510 systemd[1]: Reached target swap.target.
Dec 13 14:24:08.234536 systemd[1]: Reached target timers.target.
Dec 13 14:24:08.234560 systemd[1]: Listening on iscsid.socket.
Dec 13 14:24:08.234575 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:24:08.234590 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:24:08.234604 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:24:08.234618 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:24:08.234636 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:24:08.234650 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:24:08.234664 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:24:08.234682 systemd[1]: Reached target sockets.target.
Dec 13 14:24:08.234696 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:24:08.234710 systemd[1]: Finished network-cleanup.service.
Dec 13 14:24:08.234724 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:24:08.234738 systemd[1]: Starting systemd-journald.service...
Dec 13 14:24:08.234751 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:24:08.234880 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:24:08.234897 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:24:08.234912 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:24:08.234935 systemd-journald[185]: Journal started
Dec 13 14:24:08.235200 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2e63c84278ad4b074af4ecaed8659d) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:24:08.238403 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 14:24:08.362491 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:24:08.362527 kernel: Bridge firewalling registered
Dec 13 14:24:08.362540 kernel: SCSI subsystem initialized
Dec 13 14:24:08.362551 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:24:08.362566 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:24:08.362580 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:24:08.362591 systemd[1]: Started systemd-journald.service.
Dec 13 14:24:08.362608 kernel: audit: type=1130 audit(1734099848.354:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.238680 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:24:08.368567 kernel: audit: type=1130 audit(1734099848.361:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.238728 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:24:08.381909 kernel: audit: type=1130 audit(1734099848.367:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.242560 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 14:24:08.390884 kernel: audit: type=1130 audit(1734099848.380:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.390911 kernel: audit: type=1130 audit(1734099848.385:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.247371 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 14:24:08.299119 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 14:24:08.343828 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 14:24:08.404259 kernel: audit: type=1130 audit(1734099848.392:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.365718 systemd[1]: Started systemd-resolved.service.
Dec 13 14:24:08.368689 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:24:08.382180 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:24:08.391093 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:24:08.404375 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:24:08.413734 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:24:08.415487 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:24:08.416882 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:24:08.428438 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:24:08.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.433712 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:24:08.436778 kernel: audit: type=1130 audit(1734099848.427:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.441712 kernel: audit: type=1130 audit(1734099848.434:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.444711 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:24:08.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.449666 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:24:08.451380 kernel: audit: type=1130 audit(1734099848.444:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.460996 dracut-cmdline[206]: dracut-dracut-053
Dec 13 14:24:08.464300 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:24:08.533480 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:24:08.553482 kernel: iscsi: registered transport (tcp)
Dec 13 14:24:08.582485 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:24:08.582551 kernel: QLogic iSCSI HBA Driver
Dec 13 14:24:08.617688 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:24:08.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:08.620377 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:24:08.677481 kernel: raid6: avx512x4 gen() 16239 MB/s
Dec 13 14:24:08.694481 kernel: raid6: avx512x4 xor() 6625 MB/s
Dec 13 14:24:08.711479 kernel: raid6: avx512x2 gen() 17182 MB/s
Dec 13 14:24:08.728483 kernel: raid6: avx512x2 xor() 21610 MB/s
Dec 13 14:24:08.745487 kernel: raid6: avx512x1 gen() 16010 MB/s
Dec 13 14:24:08.762484 kernel: raid6: avx512x1 xor() 20732 MB/s
Dec 13 14:24:08.780487 kernel: raid6: avx2x4 gen() 15395 MB/s
Dec 13 14:24:08.797489 kernel: raid6: avx2x4 xor() 6912 MB/s
Dec 13 14:24:08.814485 kernel: raid6: avx2x2 gen() 11242 MB/s
Dec 13 14:24:08.831487 kernel: raid6: avx2x2 xor() 14100 MB/s
Dec 13 14:24:08.848481 kernel: raid6: avx2x1 gen() 12738 MB/s
Dec 13 14:24:08.866129 kernel: raid6: avx2x1 xor() 13898 MB/s
Dec 13 14:24:08.883489 kernel: raid6: sse2x4 gen() 5679 MB/s
Dec 13 14:24:08.901499 kernel: raid6: sse2x4 xor() 4472 MB/s
Dec 13 14:24:08.919487 kernel: raid6: sse2x2 gen() 8389 MB/s
Dec 13 14:24:08.938503 kernel: raid6: sse2x2 xor() 4221 MB/s
Dec 13 14:24:08.957539 kernel: raid6: sse2x1 gen() 5800 MB/s
Dec 13 14:24:08.982358 kernel: raid6: sse2x1 xor() 1449 MB/s
Dec 13 14:24:08.982437 kernel: raid6: using algorithm avx512x2 gen() 17182 MB/s
Dec 13 14:24:08.982465 kernel: raid6: .... xor() 21610 MB/s, rmw enabled
Dec 13 14:24:08.982482 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 14:24:08.996480 kernel: xor: automatically using best checksumming function avx
Dec 13 14:24:09.100482 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:24:09.108807 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:24:09.111330 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:24:09.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:09.109000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:24:09.109000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:24:09.131733 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Dec 13 14:24:09.141677 systemd[1]: Started systemd-udevd.service.
Dec 13 14:24:09.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:09.144051 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:24:09.174635 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation
Dec 13 14:24:09.228276 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:24:09.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:09.231635 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:24:09.300480 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:24:09.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:09.424426 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:24:09.472156 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:24:09.472383 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:24:09.472404 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 14:24:09.472571 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:dd:0b:0c:f5:73
Dec 13 14:24:09.474238 (udev-worker)[430]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:24:09.648608 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:24:09.648861 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 14:24:09.648982 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:24:09.649160 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:24:09.649180 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:24:09.649198 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:24:09.649218 kernel: GPT:9289727 != 16777215
Dec 13 14:24:09.649240 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:24:09.649257 kernel: GPT:9289727 != 16777215
Dec 13 14:24:09.649273 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:24:09.649291 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:24:09.649309 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (441)
Dec 13 14:24:09.705278 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:24:09.726152 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:24:09.729890 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:24:09.743772 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:24:09.756235 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:24:09.767637 systemd[1]: Starting disk-uuid.service...
Dec 13 14:24:09.780237 disk-uuid[593]: Primary Header is updated.
Dec 13 14:24:09.780237 disk-uuid[593]: Secondary Entries is updated.
Dec 13 14:24:09.780237 disk-uuid[593]: Secondary Header is updated.
Dec 13 14:24:09.788472 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:24:09.796480 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:24:09.802487 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:24:10.805561 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:24:10.806226 disk-uuid[594]: The operation has completed successfully.
Dec 13 14:24:10.982119 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:24:10.982261 systemd[1]: Finished disk-uuid.service.
Dec 13 14:24:10.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:10.995728 systemd[1]: Starting verity-setup.service...
Dec 13 14:24:11.031198 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:24:11.176742 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:24:11.180361 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:24:11.182192 systemd[1]: Finished verity-setup.service.
Dec 13 14:24:11.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.269722 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:24:11.270199 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:24:11.271714 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:24:11.274494 systemd[1]: Starting ignition-setup.service...
Dec 13 14:24:11.278383 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:24:11.306046 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:24:11.306109 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:24:11.306132 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:24:11.332486 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:24:11.349338 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:24:11.364158 systemd[1]: Finished ignition-setup.service.
Dec 13 14:24:11.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.367138 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:24:11.387954 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:24:11.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.389000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:24:11.392042 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:24:11.446442 systemd-networkd[1105]: lo: Link UP
Dec 13 14:24:11.446563 systemd-networkd[1105]: lo: Gained carrier
Dec 13 14:24:11.447392 systemd-networkd[1105]: Enumeration completed
Dec 13 14:24:11.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.451641 systemd-networkd[1105]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:24:11.454989 systemd[1]: Started systemd-networkd.service.
Dec 13 14:24:11.461510 systemd[1]: Reached target network.target.
Dec 13 14:24:11.477852 systemd[1]: Starting iscsiuio.service...
Dec 13 14:24:11.477925 systemd-networkd[1105]: eth0: Link UP
Dec 13 14:24:11.477931 systemd-networkd[1105]: eth0: Gained carrier
Dec 13 14:24:11.490556 systemd[1]: Started iscsiuio.service.
Dec 13 14:24:11.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.493566 systemd[1]: Starting iscsid.service...
Dec 13 14:24:11.500027 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:24:11.500027 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:24:11.500027 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:24:11.500027 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:24:11.509257 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:24:11.509257 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:24:11.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.509167 systemd[1]: Started iscsid.service.
Dec 13 14:24:11.511438 systemd-networkd[1105]: eth0: DHCPv4 address 172.31.21.15/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:24:11.512663 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:24:11.553541 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:24:11.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:11.553951 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:24:11.557946 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:24:11.560093 systemd[1]: Reached target remote-fs.target.
Dec 13 14:24:11.562180 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:24:11.575229 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:24:11.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.139077 ignition[1087]: Ignition 2.14.0
Dec 13 14:24:12.139093 ignition[1087]: Stage: fetch-offline
Dec 13 14:24:12.139232 ignition[1087]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:12.139330 ignition[1087]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:24:12.159934 ignition[1087]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:24:12.163291 ignition[1087]: Ignition finished successfully
Dec 13 14:24:12.165417 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:24:12.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.168445 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:24:12.188732 ignition[1130]: Ignition 2.14.0
Dec 13 14:24:12.188747 ignition[1130]: Stage: fetch
Dec 13 14:24:12.189039 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:12.189072 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:24:12.214845 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:24:12.217185 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:24:12.233946 ignition[1130]: INFO : PUT result: OK
Dec 13 14:24:12.245873 ignition[1130]: DEBUG : parsed url from cmdline: ""
Dec 13 14:24:12.245873 ignition[1130]: INFO : no config URL provided
Dec 13 14:24:12.245873 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:24:12.245873 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:24:12.261311 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:24:12.261311 ignition[1130]: INFO : PUT result: OK
Dec 13 14:24:12.261311 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:24:12.272576 ignition[1130]: INFO : GET result: OK
Dec 13 14:24:12.272576 ignition[1130]: DEBUG : parsing config with SHA512: 57c0686633279e485e8393097922f8e45540234e04cc0dcde3eebcdfcd6d6c7146cc4b172d97fcf1e22b187e2a19369d7359ebb7149e23d26cbc2c8fa9d8c10a
Dec 13 14:24:12.288442 unknown[1130]: fetched base config from "system"
Dec 13 14:24:12.288544 unknown[1130]: fetched base config from "system"
Dec 13 14:24:12.289745 ignition[1130]: fetch: fetch complete
Dec 13 14:24:12.288554 unknown[1130]: fetched user config from "aws"
Dec 13 14:24:12.298958 kernel: kauditd_printk_skb: 19 callbacks suppressed
Dec 13 14:24:12.298991 kernel: audit: type=1130 audit(1734099852.291:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.289754 ignition[1130]: fetch: fetch passed
Dec 13 14:24:12.291688 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:24:12.289814 ignition[1130]: Ignition finished successfully
Dec 13 14:24:12.293839 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:24:12.315775 ignition[1136]: Ignition 2.14.0
Dec 13 14:24:12.315793 ignition[1136]: Stage: kargs
Dec 13 14:24:12.316040 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:12.316073 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:24:12.325578 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:24:12.327553 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:24:12.329059 ignition[1136]: INFO : PUT result: OK
Dec 13 14:24:12.332336 ignition[1136]: kargs: kargs passed
Dec 13 14:24:12.332394 ignition[1136]: Ignition finished successfully
Dec 13 14:24:12.334478 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:24:12.341828 kernel: audit: type=1130 audit(1734099852.335:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.338383 systemd[1]: Starting ignition-disks.service...
Dec 13 14:24:12.354443 ignition[1142]: Ignition 2.14.0
Dec 13 14:24:12.354506 ignition[1142]: Stage: disks
Dec 13 14:24:12.355237 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:24:12.355315 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:24:12.367142 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:24:12.368918 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:24:12.371294 ignition[1142]: INFO : PUT result: OK
Dec 13 14:24:12.375074 ignition[1142]: disks: disks passed
Dec 13 14:24:12.375140 ignition[1142]: Ignition finished successfully
Dec 13 14:24:12.378704 systemd[1]: Finished ignition-disks.service.
Dec 13 14:24:12.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.379096 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:24:12.391727 kernel: audit: type=1130 audit(1734099852.377:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.386911 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:24:12.388139 systemd[1]: Reached target local-fs.target.
Dec 13 14:24:12.391750 systemd[1]: Reached target sysinit.target.
Dec 13 14:24:12.392873 systemd[1]: Reached target basic.target.
Dec 13 14:24:12.396397 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:24:12.443240 systemd-fsck[1150]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:24:12.447585 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:24:12.456247 kernel: audit: type=1130 audit(1734099852.447:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:24:12.450699 systemd[1]: Mounting sysroot.mount...
Dec 13 14:24:12.473548 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:24:12.475719 systemd[1]: Mounted sysroot.mount.
Dec 13 14:24:12.478966 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:24:12.490662 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:24:12.493075 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:24:12.493157 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:24:12.493198 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:24:12.503493 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:24:12.529332 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:24:12.536787 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:24:12.557153 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:24:12.563160 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1167) Dec 13 14:24:12.563193 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:24:12.563205 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:24:12.563216 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:24:12.580751 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:24:12.591277 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:24:12.609682 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:24:12.630218 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:24:12.646055 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:24:12.832791 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:24:12.841519 kernel: audit: type=1130 audit(1734099852.831:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:12.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:12.834348 systemd[1]: Starting ignition-mount.service... Dec 13 14:24:12.842856 systemd[1]: Starting sysroot-boot.service... Dec 13 14:24:12.851075 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:24:12.851247 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:24:12.871162 ignition[1232]: INFO : Ignition 2.14.0 Dec 13 14:24:12.872283 ignition[1232]: INFO : Stage: mount Dec 13 14:24:12.873297 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:12.874795 ignition[1232]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:24:12.886554 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:24:12.890719 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:24:12.892789 ignition[1232]: INFO : PUT result: OK Dec 13 14:24:12.896932 ignition[1232]: INFO : mount: mount passed Dec 13 14:24:12.898559 ignition[1232]: INFO : Ignition finished successfully Dec 13 14:24:12.899956 systemd[1]: Finished ignition-mount.service. Dec 13 14:24:12.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:12.902672 systemd[1]: Starting ignition-files.service... 
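The "cut: /sysroot/etc/passwd: No such file or directory" lines earlier in this stretch are the first-boot case: initrd-setup-root probes the seed files with cut(1) before they exist, and the service still finishes successfully. A rough Python equivalent of that probe (which field cut extracts is not in the log; the first field, the entry name, is assumed):

    from pathlib import Path

    def seed_entries(root: str = "/sysroot/etc") -> dict:
        # Mirror "cut -d: -f1 <file>", tolerating missing files on first boot.
        out = {}
        for name in ("passwd", "group", "shadow", "gshadow"):
            p = Path(root) / name
            out[name] = (
                [line.split(":", 1)[0] for line in p.read_text().splitlines()]
                if p.exists() else []
            )
        return out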
Dec 13 14:24:12.908089 kernel: audit: type=1130 audit(1734099852.900:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:12.910487 systemd[1]: Finished sysroot-boot.service. Dec 13 14:24:12.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:12.916502 kernel: audit: type=1130 audit(1734099852.910:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:12.918551 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:24:12.936479 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1243) Dec 13 14:24:12.939385 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:24:12.939446 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:24:12.939477 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:24:12.967566 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:24:12.974202 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:24:13.008090 ignition[1262]: INFO : Ignition 2.14.0 Dec 13 14:24:13.008090 ignition[1262]: INFO : Stage: files Dec 13 14:24:13.010216 ignition[1262]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:13.010216 ignition[1262]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:24:13.021552 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:24:13.024093 ignition[1262]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:24:13.025837 ignition[1262]: INFO : PUT result: OK Dec 13 14:24:13.030100 ignition[1262]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:24:13.037113 ignition[1262]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:24:13.038777 ignition[1262]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:24:13.055323 ignition[1262]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:24:13.057550 ignition[1262]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:24:13.060421 unknown[1262]: wrote ssh authorized keys file for user: core Dec 13 14:24:13.061718 ignition[1262]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:24:13.076606 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:24:13.078855 ignition[1262]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:24:13.096587 systemd-networkd[1105]: eth0: Gained IPv6LL Dec 13 14:24:13.235699 ignition[1262]: INFO : GET result: OK Dec 13 14:24:13.633940 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:24:13.633940 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/nfs-pod.yaml" Dec 13 14:24:13.640298 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:24:13.640298 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:24:13.640298 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:24:13.656039 ignition[1262]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4193507641" Dec 13 14:24:13.657967 ignition[1262]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4193507641": device or resource busy Dec 13 14:24:13.657967 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4193507641", trying btrfs: device or resource busy Dec 13 14:24:13.657967 ignition[1262]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4193507641" Dec 13 14:24:13.665881 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1264) Dec 13 14:24:13.665912 ignition[1262]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4193507641" Dec 13 14:24:13.667571 ignition[1262]: INFO : op(3): [started] unmounting "/mnt/oem4193507641" Dec 13 14:24:13.669073 ignition[1262]: INFO : op(3): [finished] unmounting "/mnt/oem4193507641" Dec 13 14:24:13.669073 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:24:13.669073 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:24:13.669073 ignition[1262]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:24:13.679401 systemd[1]: mnt-oem4193507641.mount: Deactivated successfully. 
Dec 13 14:24:14.094634 ignition[1262]: INFO : GET result: OK Dec 13 14:24:14.362097 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:24:14.362097 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:24:14.366105 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:24:14.366105 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:24:14.370420 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:24:14.370420 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:24:14.374198 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:24:14.376234 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:24:14.379002 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:24:14.379002 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:24:14.379002 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:24:14.389004 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:24:14.389004 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:24:14.401062 ignition[1262]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem344691967" Dec 13 14:24:14.401062 ignition[1262]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem344691967": device or resource busy Dec 13 14:24:14.401062 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem344691967", trying btrfs: device or resource busy Dec 13 14:24:14.401062 ignition[1262]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem344691967" Dec 13 14:24:14.408571 ignition[1262]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem344691967" Dec 13 14:24:14.408571 ignition[1262]: INFO : op(6): [started] unmounting "/mnt/oem344691967" Dec 13 14:24:14.408571 ignition[1262]: INFO : op(6): [finished] unmounting "/mnt/oem344691967" Dec 13 14:24:14.408571 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:24:14.408571 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:24:14.408571 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:24:14.438399 ignition[1262]: INFO : op(7): [started] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem4288928878" Dec 13 14:24:14.442070 ignition[1262]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4288928878": device or resource busy Dec 13 14:24:14.442070 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4288928878", trying btrfs: device or resource busy Dec 13 14:24:14.442070 ignition[1262]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4288928878" Dec 13 14:24:14.442070 ignition[1262]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4288928878" Dec 13 14:24:14.442070 ignition[1262]: INFO : op(9): [started] unmounting "/mnt/oem4288928878" Dec 13 14:24:14.442070 ignition[1262]: INFO : op(9): [finished] unmounting "/mnt/oem4288928878" Dec 13 14:24:14.442070 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:24:14.442070 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:24:14.442070 ignition[1262]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:24:14.475697 systemd[1]: mnt-oem4288928878.mount: Deactivated successfully. Dec 13 14:24:14.767271 ignition[1262]: INFO : GET result: OK Dec 13 14:24:15.219913 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:24:15.219913 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:24:15.224740 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:24:15.230400 ignition[1262]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1067822723" Dec 13 14:24:15.232016 ignition[1262]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1067822723": device or resource busy Dec 13 14:24:15.232016 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1067822723", trying btrfs: device or resource busy Dec 13 14:24:15.232016 ignition[1262]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1067822723" Dec 13 14:24:15.243340 ignition[1262]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1067822723" Dec 13 14:24:15.243340 ignition[1262]: INFO : op(c): [started] unmounting "/mnt/oem1067822723" Dec 13 14:24:15.243340 ignition[1262]: INFO : op(c): [finished] unmounting "/mnt/oem1067822723" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(10): [started] processing unit "nvidia.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(10): [finished] processing unit "nvidia.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service" Dec 13 14:24:15.243340 
ignition[1262]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:24:15.243340 ignition[1262]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:24:15.281933 ignition[1262]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:24:15.281933 ignition[1262]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:24:15.281933 ignition[1262]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:24:15.288287 systemd[1]: mnt-oem1067822723.mount: Deactivated successfully. Dec 13 14:24:15.294976 ignition[1262]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:24:15.298034 ignition[1262]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:24:15.298034 ignition[1262]: INFO : files: files passed Dec 13 14:24:15.298034 ignition[1262]: INFO : Ignition finished successfully Dec 13 14:24:15.303178 systemd[1]: Finished ignition-files.service. Dec 13 14:24:15.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.309511 kernel: audit: type=1130 audit(1734099855.303:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.311919 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:24:15.313119 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:24:15.317882 systemd[1]: Starting ignition-quench.service... Dec 13 14:24:15.322510 systemd[1]: ignition-quench.service: Deactivated successfully. 
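The "setting preset to enabled" steps at the end of the files stage amount to writing systemd preset entries into the target root so those units start on first boot. A sketch of the effect; the preset file name is an assumption, since the log does not show where Ignition writes it:

    from pathlib import Path

    def preset_enable(unit: str, root: str = "/sysroot") -> None:
        # One "enable <unit>" line per preset, as systemd.preset(5) expects.
        preset_dir = Path(root) / "etc/systemd/system-preset"
        preset_dir.mkdir(parents=True, exist_ok=True)
        with (preset_dir / "20-ignition.preset").open("a") as f:
            f.write(f"enable {unit}\n")

    for unit in ("prepare-helm.service", "nvidia.service",
                 "amazon-ssm-agent.service"):
        preset_enable(unit)
    # coreos-metadata-sshkeys@.service also gets a preset in the log above;
    # instance handling for template units is elided in this sketch.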
Dec 13 14:24:15.323793 systemd[1]: Finished ignition-quench.service. Dec 13 14:24:15.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.326283 initrd-setup-root-after-ignition[1287]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:24:15.334276 kernel: audit: type=1130 audit(1734099855.324:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.334311 kernel: audit: type=1131 audit(1734099855.328:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.334566 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:24:15.336847 systemd[1]: Reached target ignition-complete.target. Dec 13 14:24:15.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.339502 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:24:15.356830 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:24:15.356943 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:24:15.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.359240 systemd[1]: Reached target initrd-fs.target. Dec 13 14:24:15.360766 systemd[1]: Reached target initrd.target. Dec 13 14:24:15.361649 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:24:15.363050 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:24:15.376957 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:24:15.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.379882 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:24:15.392174 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:24:15.392404 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:24:15.395996 systemd[1]: Stopped target timers.target. Dec 13 14:24:15.398279 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:24:15.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.398406 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:24:15.401076 systemd[1]: Stopped target initrd.target. 
Dec 13 14:24:15.403853 systemd[1]: Stopped target basic.target. Dec 13 14:24:15.405944 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:24:15.407132 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:24:15.411240 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:24:15.413872 systemd[1]: Stopped target remote-fs.target. Dec 13 14:24:15.415509 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:24:15.417503 systemd[1]: Stopped target sysinit.target. Dec 13 14:24:15.419474 systemd[1]: Stopped target local-fs.target. Dec 13 14:24:15.421055 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:24:15.423552 systemd[1]: Stopped target swap.target. Dec 13 14:24:15.425647 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:24:15.425817 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:24:15.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.428855 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:24:15.431721 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:24:15.433136 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:24:15.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.436989 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:24:15.437099 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:24:15.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.441285 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:24:15.442766 systemd[1]: Stopped ignition-files.service. Dec 13 14:24:15.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.460939 iscsid[1111]: iscsid shutting down. Dec 13 14:24:15.445879 systemd[1]: Stopping ignition-mount.service... Dec 13 14:24:15.447030 systemd[1]: Stopping iscsid.service... Dec 13 14:24:15.449219 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:24:15.451143 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:24:15.452187 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:24:15.453945 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 14:24:15.473407 ignition[1300]: INFO : Ignition 2.14.0 Dec 13 14:24:15.473407 ignition[1300]: INFO : Stage: umount Dec 13 14:24:15.473407 ignition[1300]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:24:15.473407 ignition[1300]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:24:15.454138 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:24:15.476523 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:24:15.476639 systemd[1]: Stopped iscsid.service. Dec 13 14:24:15.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.484151 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:24:15.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.484324 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:24:15.486541 systemd[1]: Stopping iscsiuio.service... Dec 13 14:24:15.490545 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:24:15.490545 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:24:15.495508 ignition[1300]: INFO : PUT result: OK Dec 13 14:24:15.496950 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:24:15.498119 systemd[1]: Stopped iscsiuio.service. Dec 13 14:24:15.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.504384 ignition[1300]: INFO : umount: umount passed Dec 13 14:24:15.505418 ignition[1300]: INFO : Ignition finished successfully Dec 13 14:24:15.506801 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:24:15.506995 systemd[1]: Stopped ignition-mount.service. Dec 13 14:24:15.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.509394 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:24:15.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.509561 systemd[1]: Stopped ignition-disks.service. Dec 13 14:24:15.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.511183 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:24:15.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:15.511239 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:24:15.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.513616 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:24:15.513713 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:24:15.515425 systemd[1]: Stopped target network.target. Dec 13 14:24:15.517205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:24:15.517267 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:24:15.518483 systemd[1]: Stopped target paths.target. Dec 13 14:24:15.520087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:24:15.521778 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:24:15.532403 systemd[1]: Stopped target slices.target. Dec 13 14:24:15.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.533609 systemd[1]: Stopped target sockets.target. Dec 13 14:24:15.534765 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:24:15.534811 systemd[1]: Closed iscsid.socket. Dec 13 14:24:15.535763 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:24:15.535803 systemd[1]: Closed iscsiuio.socket. Dec 13 14:24:15.536810 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:24:15.536978 systemd[1]: Stopped ignition-setup.service. Dec 13 14:24:15.538089 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:24:15.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.538927 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:24:15.540776 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:24:15.545728 systemd-networkd[1105]: eth0: DHCPv6 lease lost Dec 13 14:24:15.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.547799 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:24:15.547898 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:24:15.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.551180 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:24:15.556000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:24:15.556000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:24:15.551313 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:24:15.555164 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:24:15.555277 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:24:15.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.558367 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Dec 13 14:24:15.558415 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:24:15.561335 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:24:15.562498 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:24:15.568875 systemd[1]: Stopping network-cleanup.service... Dec 13 14:24:15.571576 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:24:15.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.571709 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:24:15.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.574939 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:24:15.575005 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:24:15.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.576857 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:24:15.576918 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:24:15.580895 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:24:15.595650 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:24:15.599328 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:24:15.603036 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:24:15.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.605745 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:24:15.606903 systemd[1]: Stopped network-cleanup.service. Dec 13 14:24:15.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.608766 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:24:15.608828 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:24:15.611762 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:24:15.611807 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:24:15.617196 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:24:15.618290 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:24:15.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.620408 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:24:15.620513 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:24:15.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.623440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 14:24:15.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.623513 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:24:15.631408 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:24:15.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.633158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:24:15.633245 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:24:15.634446 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:24:15.634511 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:24:15.636090 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:24:15.636134 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:24:15.639536 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:24:15.657913 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:24:15.658050 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:24:15.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:15.661920 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:24:15.665167 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:24:15.680614 systemd[1]: Switching root. Dec 13 14:24:15.703936 systemd-journald[185]: Journal stopped Dec 13 14:24:20.828090 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 14:24:20.828173 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:24:20.828193 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:24:20.828210 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:24:20.828227 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:24:20.828244 kernel: SELinux: policy capability open_perms=1 Dec 13 14:24:20.828261 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:24:20.828277 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:24:20.828297 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:24:20.828316 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:24:20.828332 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:24:20.828347 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:24:20.828368 systemd[1]: Successfully loaded SELinux policy in 87.385ms. Dec 13 14:24:20.828401 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.346ms. Dec 13 14:24:20.828421 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:24:20.828446 systemd[1]: Detected virtualization amazon. Dec 13 14:24:20.828483 systemd[1]: Detected architecture x86-64. Dec 13 14:24:20.828505 systemd[1]: Detected first boot. Dec 13 14:24:20.828522 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:24:20.828543 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:24:20.828563 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:24:20.828590 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:24:20.828613 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:24:20.828636 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:24:20.828662 kernel: kauditd_printk_skb: 56 callbacks suppressed Dec 13 14:24:20.828681 kernel: audit: type=1334 audit(1734099860.505:89): prog-id=12 op=LOAD Dec 13 14:24:20.828698 kernel: audit: type=1334 audit(1734099860.505:90): prog-id=3 op=UNLOAD Dec 13 14:24:20.828714 kernel: audit: type=1334 audit(1734099860.507:91): prog-id=13 op=LOAD Dec 13 14:24:20.828732 kernel: audit: type=1334 audit(1734099860.509:92): prog-id=14 op=LOAD Dec 13 14:24:20.828752 kernel: audit: type=1334 audit(1734099860.509:93): prog-id=4 op=UNLOAD Dec 13 14:24:20.828777 kernel: audit: type=1334 audit(1734099860.509:94): prog-id=5 op=UNLOAD Dec 13 14:24:20.828797 kernel: audit: type=1334 audit(1734099860.510:95): prog-id=15 op=LOAD Dec 13 14:24:20.828815 kernel: audit: type=1334 audit(1734099860.510:96): prog-id=12 op=UNLOAD Dec 13 14:24:20.828831 kernel: audit: type=1334 audit(1734099860.511:97): prog-id=16 op=LOAD Dec 13 14:24:20.828847 kernel: audit: type=1334 audit(1734099860.515:98): prog-id=17 op=LOAD Dec 13 14:24:20.837534 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:24:20.837575 systemd[1]: Stopped initrd-switch-root.service. 
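The "policy capability ..." lines can be re-checked after boot from selinuxfs, which exposes one file per capability. A small probe, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

    from pathlib import Path

    caps = Path("/sys/fs/selinux/policy_capabilities")
    for f in sorted(caps.iterdir()):
        # Each file holds 0 or 1, matching the kernel lines above.
        print(f.name, f.read_text().strip())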
Dec 13 14:24:20.837596 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:24:20.837616 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:24:20.837634 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:24:20.837658 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:24:20.837676 systemd[1]: Created slice system-getty.slice. Dec 13 14:24:20.837693 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:24:20.837712 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:24:20.837731 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:24:20.837748 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:24:20.837766 systemd[1]: Created slice user.slice. Dec 13 14:24:20.837786 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:24:20.837804 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:24:20.837821 systemd[1]: Set up automount boot.automount. Dec 13 14:24:20.837839 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:24:20.837856 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:24:20.837873 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:24:20.837891 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:24:20.837909 systemd[1]: Reached target integritysetup.target. Dec 13 14:24:20.837927 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:24:20.837949 systemd[1]: Reached target remote-fs.target. Dec 13 14:24:20.837970 systemd[1]: Reached target slices.target. Dec 13 14:24:20.837989 systemd[1]: Reached target swap.target. Dec 13 14:24:20.838007 systemd[1]: Reached target torcx.target. Dec 13 14:24:20.838025 systemd[1]: Reached target veritysetup.target. Dec 13 14:24:20.838043 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:24:20.838062 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:24:20.838080 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:24:20.838098 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:24:20.838115 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:24:20.838192 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:24:20.838212 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:24:20.838231 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:24:20.838249 systemd[1]: Mounting media.mount... Dec 13 14:24:20.838268 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:20.838286 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:24:20.838304 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:24:20.838323 systemd[1]: Mounting tmp.mount... Dec 13 14:24:20.838340 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:24:20.838361 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:20.838380 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:24:20.838410 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:24:20.838429 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:20.838450 systemd[1]: Starting modprobe@drm.service... Dec 13 14:24:20.838485 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:20.838503 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:24:20.838522 systemd[1]: Starting modprobe@loop.service... 
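Unit names such as system-coreos\x2dmetadata\x2dsshkeys.slice use systemd's name escaping, in which a literal "-" inside a component becomes \x2d (dashes otherwise separate slice hierarchy levels). A small decoder for reading such names back; this is a local helper, not a systemd API:

    import re

    def unescape_unit(name: str) -> str:
        # Reverse systemd's \xNN escaping.
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"system-coreos\x2dmetadata\x2dsshkeys.slice"))
    # -> system-coreos-metadata-sshkeys.slice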
Dec 13 14:24:20.838541 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:24:20.838559 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:24:20.838577 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:24:20.838595 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:24:20.838614 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:24:20.838634 systemd[1]: Stopped systemd-journald.service. Dec 13 14:24:20.838656 systemd[1]: Starting systemd-journald.service... Dec 13 14:24:20.838676 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:24:20.838696 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:24:20.838718 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:24:20.838737 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:24:20.838757 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:24:20.838776 systemd[1]: Stopped verity-setup.service. Dec 13 14:24:20.838796 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:20.838815 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:24:20.838835 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:24:20.838856 systemd[1]: Mounted media.mount. Dec 13 14:24:20.838877 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:24:20.838896 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:24:20.838915 systemd[1]: Mounted tmp.mount. Dec 13 14:24:20.838933 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:24:20.838951 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:24:20.838969 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:24:20.838987 kernel: fuse: init (API version 7.34) Dec 13 14:24:20.839010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:20.839028 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:20.839046 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:24:20.839064 systemd[1]: Finished modprobe@drm.service. Dec 13 14:24:20.839083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:20.839104 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:20.839122 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:24:20.839140 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:24:20.839158 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:24:20.839178 systemd[1]: Reached target network-pre.target. Dec 13 14:24:20.839196 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:24:20.839214 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:24:20.839234 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:24:20.839253 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:24:20.839273 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:24:20.839291 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:24:20.839310 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:24:20.839329 systemd[1]: Finished modprobe@fuse.service. 
Dec 13 14:24:20.839356 systemd-journald[1410]: Journal started Dec 13 14:24:20.839433 systemd-journald[1410]: Runtime Journal (/run/log/journal/ec2e63c84278ad4b074af4ecaed8659d) is 4.8M, max 38.7M, 33.9M free. Dec 13 14:24:16.316000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:24:16.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:24:16.403000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:24:16.404000 audit: BPF prog-id=10 op=LOAD Dec 13 14:24:16.404000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:24:16.404000 audit: BPF prog-id=11 op=LOAD Dec 13 14:24:16.404000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:24:16.582000 audit[1333]: AVC avc: denied { associate } for pid=1333 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:24:16.582000 audit[1333]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:16.582000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:24:16.584000 audit[1333]: AVC avc: denied { associate } for pid=1333 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:24:16.584000 audit[1333]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=1316 pid=1333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:16.584000 audit: CWD cwd="/" Dec 13 14:24:16.584000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:20.842656 systemd[1]: Started systemd-journald.service. 
Dec 13 14:24:16.584000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:16.584000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:24:20.505000 audit: BPF prog-id=12 op=LOAD Dec 13 14:24:20.505000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:24:20.507000 audit: BPF prog-id=13 op=LOAD Dec 13 14:24:20.855505 kernel: loop: module loaded Dec 13 14:24:20.509000 audit: BPF prog-id=14 op=LOAD Dec 13 14:24:20.509000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:24:20.509000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:24:20.510000 audit: BPF prog-id=15 op=LOAD Dec 13 14:24:20.510000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:24:20.511000 audit: BPF prog-id=16 op=LOAD Dec 13 14:24:20.515000 audit: BPF prog-id=17 op=LOAD Dec 13 14:24:20.515000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:24:20.515000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:24:20.518000 audit: BPF prog-id=18 op=LOAD Dec 13 14:24:20.518000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:24:20.520000 audit: BPF prog-id=19 op=LOAD Dec 13 14:24:20.522000 audit: BPF prog-id=20 op=LOAD Dec 13 14:24:20.522000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:24:20.522000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:24:20.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.544000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:24:20.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:20.704000 audit: BPF prog-id=21 op=LOAD Dec 13 14:24:20.704000 audit: BPF prog-id=22 op=LOAD Dec 13 14:24:20.705000 audit: BPF prog-id=23 op=LOAD Dec 13 14:24:20.705000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:24:20.705000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:24:20.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:20.816000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:24:20.816000 audit[1410]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe4eed21b0 a2=4000 a3=7ffe4eed224c items=0 ppid=1 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:20.816000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:24:20.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:16.579643 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:24:20.504583 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:24:16.580960 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:24:20.524313 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:24:16.580997 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:24:20.844505 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:24:16.581042 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:24:20.848545 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:24:16.581058 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:24:20.850847 systemd[1]: Starting systemd-journal-flush.service... 
Dec 13 14:24:16.581104 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:24:20.852472 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:16.581124 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:24:20.852674 systemd[1]: Finished modprobe@loop.service. Dec 13 14:24:16.581411 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:24:20.854035 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:24:16.581564 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:24:20.856958 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:24:16.581635 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:24:20.875738 systemd-journald[1410]: Time spent on flushing to /var/log/journal/ec2e63c84278ad4b074af4ecaed8659d is 74.220ms for 1193 entries. Dec 13 14:24:20.875738 systemd-journald[1410]: System Journal (/var/log/journal/ec2e63c84278ad4b074af4ecaed8659d) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:24:20.970585 systemd-journald[1410]: Received client request to flush runtime journal. Dec 13 14:24:20.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:16.582690 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:24:20.880675 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:24:16.582746 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:24:20.881880 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:24:16.582775 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:24:20.898849 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:24:16.582798 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:24:20.951244 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:24:16.582824 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:24:20.954035 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:24:16.582846 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:24:20.969521 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:24:20.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:19.903982 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:19Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:24:20.972398 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:24:19.904313 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:19Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:24:19.904545 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:19Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:24:19.905430 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:19Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:24:19.905508 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:19Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:24:19.905568 /usr/lib/systemd/system-generators/torcx-generator[1333]: time="2024-12-13T14:24:19Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:24:20.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:20.978095 systemd[1]: Finished systemd-journal-flush.service. 
Dec 13 14:24:20.984107 udevadm[1447]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:24:21.018158 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:24:21.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:21.021202 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:24:21.079444 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:24:21.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:21.699836 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:24:21.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:21.700000 audit: BPF prog-id=24 op=LOAD Dec 13 14:24:21.700000 audit: BPF prog-id=25 op=LOAD Dec 13 14:24:21.700000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:24:21.700000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:24:21.704147 systemd[1]: Starting systemd-udevd.service... Dec 13 14:24:21.729389 systemd-udevd[1453]: Using default interface naming scheme 'v252'. Dec 13 14:24:21.777336 systemd[1]: Started systemd-udevd.service. Dec 13 14:24:21.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:21.779000 audit: BPF prog-id=26 op=LOAD Dec 13 14:24:21.785682 systemd[1]: Starting systemd-networkd.service... Dec 13 14:24:21.805265 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:24:21.801000 audit: BPF prog-id=27 op=LOAD Dec 13 14:24:21.801000 audit: BPF prog-id=28 op=LOAD Dec 13 14:24:21.801000 audit: BPF prog-id=29 op=LOAD Dec 13 14:24:21.878353 (udev-worker)[1459]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:24:21.912242 systemd[1]: Started systemd-userdbd.service. Dec 13 14:24:21.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:21.926448 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Dec 13 14:24:21.984480 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 14:24:21.988855 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:24:21.991048 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 13 14:24:21.993478 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 14:24:21.996000 audit[1466]: AVC avc: denied { confidentiality } for pid=1466 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:24:21.996000 audit[1466]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b2a04e9800 a1=337fc a2=7f7cc2789bc5 a3=5 items=110 ppid=1453 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:21.996000 audit: CWD cwd="/" Dec 13 14:24:21.996000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=1 name=(null) inode=15389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=2 name=(null) inode=15389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=3 name=(null) inode=15390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=4 name=(null) inode=15389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=5 name=(null) inode=15391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=6 name=(null) inode=15389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=7 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=8 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=9 name=(null) inode=15393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=10 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=11 name=(null) inode=15394 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=12 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=13 name=(null) inode=15395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=14 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=15 name=(null) inode=15396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=16 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=17 name=(null) inode=15397 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=18 name=(null) inode=15389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=19 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=20 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:22.047121 systemd-networkd[1464]: lo: Link UP Dec 13 14:24:22.047135 systemd-networkd[1464]: lo: Gained carrier Dec 13 14:24:21.996000 audit: PATH item=21 name=(null) inode=15399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:22.047905 systemd-networkd[1464]: Enumeration completed Dec 13 14:24:21.996000 audit: PATH item=22 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:22.048024 systemd[1]: Started systemd-networkd.service. Dec 13 14:24:22.048236 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:24:22.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:24:21.996000 audit: PATH item=23 name=(null) inode=15400 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=24 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=25 name=(null) inode=15401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=26 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=27 name=(null) inode=15402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:22.050958 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:24:21.996000 audit: PATH item=28 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=29 name=(null) inode=15403 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=30 name=(null) inode=15389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=31 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=32 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=33 name=(null) inode=15405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=34 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=35 name=(null) inode=15406 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=36 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=37 name=(null) inode=15407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=38 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:24:22.059528 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:24:22.056035 systemd-networkd[1464]: eth0: Link UP Dec 13 14:24:22.056200 systemd-networkd[1464]: eth0: Gained carrier Dec 13 14:24:21.996000 audit: PATH item=39 name=(null) inode=15408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=40 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=41 name=(null) inode=15409 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=42 name=(null) inode=15389 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=43 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=44 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=45 name=(null) inode=15411 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=46 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=47 name=(null) inode=15412 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=48 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=49 name=(null) inode=15413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=50 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=51 name=(null) inode=15414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=52 name=(null) inode=15410 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=53 name=(null) inode=15415 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=54 
name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=55 name=(null) inode=15416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=56 name=(null) inode=15416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=57 name=(null) inode=15417 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=58 name=(null) inode=15416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=59 name=(null) inode=15418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=60 name=(null) inode=15416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=61 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=62 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=63 name=(null) inode=15420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=64 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=65 name=(null) inode=15421 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=66 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=67 name=(null) inode=15422 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=68 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=69 name=(null) inode=15423 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=70 name=(null) inode=15419 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=71 name=(null) inode=15424 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=72 name=(null) inode=15416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=73 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=74 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=75 name=(null) inode=15426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=76 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=77 name=(null) inode=15427 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=78 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=79 name=(null) inode=15428 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=80 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=81 name=(null) inode=15429 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=82 name=(null) inode=15425 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=83 name=(null) inode=15430 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=84 name=(null) inode=15416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=85 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=86 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=87 name=(null) inode=15432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=88 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=89 name=(null) inode=15433 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=90 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=91 name=(null) inode=15434 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=92 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=93 name=(null) inode=15435 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=94 name=(null) inode=15431 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=95 name=(null) inode=15436 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=96 name=(null) inode=15416 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=97 name=(null) inode=15437 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=98 name=(null) inode=15437 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=99 name=(null) inode=15438 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=100 name=(null) inode=15437 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=101 name=(null) inode=15439 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=102 name=(null) inode=15437 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH 
item=103 name=(null) inode=15440 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=104 name=(null) inode=15437 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=105 name=(null) inode=15441 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=106 name=(null) inode=15437 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=107 name=(null) inode=15442 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PATH item=109 name=(null) inode=15443 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:24:21.996000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:24:22.068802 systemd-networkd[1464]: eth0: DHCPv4 address 172.31.21.15/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:24:22.089477 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 14:24:22.099439 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1460) Dec 13 14:24:22.123482 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 14:24:22.139682 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:24:22.231509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:24:22.352886 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:24:22.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.355967 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:24:22.382495 lvm[1567]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:24:22.420942 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:24:22.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.422290 systemd[1]: Reached target cryptsetup.target. Dec 13 14:24:22.429731 systemd[1]: Starting lvm2-activation.service... Dec 13 14:24:22.439061 lvm[1568]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:24:22.478787 systemd[1]: Finished lvm2-activation.service. Dec 13 14:24:22.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:24:22.480014 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:24:22.481072 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:24:22.481106 systemd[1]: Reached target local-fs.target. Dec 13 14:24:22.482130 systemd[1]: Reached target machines.target. Dec 13 14:24:22.484335 systemd[1]: Starting ldconfig.service... Dec 13 14:24:22.486061 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:22.486121 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:22.487310 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:24:22.489787 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:24:22.496904 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:24:22.504636 systemd[1]: Starting systemd-sysext.service... Dec 13 14:24:22.516041 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1570 (bootctl) Dec 13 14:24:22.517828 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:24:22.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.535853 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:24:22.544215 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:24:22.554287 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:24:22.554622 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:24:22.577262 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:24:22.658508 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:24:22.676929 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:24:22.680132 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:24:22.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.682706 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:24:22.690418 systemd-fsck[1579]: fsck.fat 4.2 (2021-01-31) Dec 13 14:24:22.690418 systemd-fsck[1579]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 14:24:22.693107 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:24:22.695844 systemd[1]: Mounting boot.mount... Dec 13 14:24:22.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.713051 systemd[1]: Mounted boot.mount. Dec 13 14:24:22.717696 (sd-sysext)[1583]: Using extensions 'kubernetes'. Dec 13 14:24:22.721340 (sd-sysext)[1583]: Merged extensions into '/usr'. Dec 13 14:24:22.759985 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:22.765268 systemd[1]: Mounting usr-share-oem.mount... 
Dec 13 14:24:22.766930 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:22.771662 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:22.774538 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:22.777319 systemd[1]: Starting modprobe@loop.service... Dec 13 14:24:22.778235 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:22.778424 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:22.778631 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:22.784251 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:24:22.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.785995 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:24:22.787257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:22.787394 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:22.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.788656 systemd[1]: Finished systemd-sysext.service. Dec 13 14:24:22.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.790138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:22.790304 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:22.791824 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:22.791940 systemd[1]: Finished modprobe@loop.service. Dec 13 14:24:22.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:22.796312 systemd[1]: Starting ensure-sysext.service... 
Dec 13 14:24:22.797233 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:24:22.797366 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:24:22.802371 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:24:22.818092 systemd[1]: Reloading. Dec 13 14:24:22.839280 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:24:22.840912 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:24:22.845424 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:24:22.954866 /usr/lib/systemd/system-generators/torcx-generator[1624]: time="2024-12-13T14:24:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:24:22.955433 /usr/lib/systemd/system-generators/torcx-generator[1624]: time="2024-12-13T14:24:22Z" level=info msg="torcx already run" Dec 13 14:24:23.182236 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:24:23.182258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:24:23.212503 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:24:23.257911 ldconfig[1569]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:24:23.295000 audit: BPF prog-id=30 op=LOAD Dec 13 14:24:23.295000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:24:23.296000 audit: BPF prog-id=31 op=LOAD Dec 13 14:24:23.296000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:24:23.296000 audit: BPF prog-id=32 op=LOAD Dec 13 14:24:23.297000 audit: BPF prog-id=33 op=LOAD Dec 13 14:24:23.297000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:24:23.297000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:24:23.298000 audit: BPF prog-id=34 op=LOAD Dec 13 14:24:23.299000 audit: BPF prog-id=35 op=LOAD Dec 13 14:24:23.299000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:24:23.299000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:24:23.300000 audit: BPF prog-id=36 op=LOAD Dec 13 14:24:23.300000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:24:23.300000 audit: BPF prog-id=37 op=LOAD Dec 13 14:24:23.300000 audit: BPF prog-id=38 op=LOAD Dec 13 14:24:23.300000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:24:23.300000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:24:23.304955 systemd[1]: Finished ldconfig.service. Dec 13 14:24:23.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.307647 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Dec 13 14:24:23.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.315563 systemd[1]: Starting audit-rules.service... Dec 13 14:24:23.319768 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:24:23.323281 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:24:23.324000 audit: BPF prog-id=39 op=LOAD Dec 13 14:24:23.331000 audit: BPF prog-id=40 op=LOAD Dec 13 14:24:23.328852 systemd[1]: Starting systemd-resolved.service... Dec 13 14:24:23.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.333939 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:24:23.337586 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:24:23.344582 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:24:23.345771 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:24:23.351000 audit[1680]: SYSTEM_BOOT pid=1680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.359907 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:24:23.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.363554 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:23.363953 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Dec 13 14:24:23.366836 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:23.369371 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:23.372276 systemd[1]: Starting modprobe@loop.service... Dec 13 14:24:23.373175 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.373378 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:23.373587 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:24:23.373703 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:23.376010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:23.376272 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:23.378047 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:23.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.378262 systemd[1]: Finished modprobe@loop.service. Dec 13 14:24:23.379584 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.382720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:23.382886 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:23.384812 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:24:23.395602 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:23.396014 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.398003 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:24:23.403761 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:23.407359 systemd[1]: Starting modprobe@loop.service... Dec 13 14:24:23.408244 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.408545 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:23.408718 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:24:23.408853 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:23.410734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:23.410958 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:24:23.412625 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 14:24:23.421596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:24:23.421793 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:24:23.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.423910 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:23.424384 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.426567 systemd[1]: Starting modprobe@drm.service... Dec 13 14:24:23.429279 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:24:23.430776 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.430989 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:23.431195 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:24:23.431341 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:24:23.438043 systemd[1]: Finished ensure-sysext.service. Dec 13 14:24:23.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.445288 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:24:23.445566 systemd[1]: Finished modprobe@loop.service. Dec 13 14:24:23.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.447967 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:24:23.448129 systemd[1]: Finished modprobe@drm.service. Dec 13 14:24:23.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.449433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:24:23.449816 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:24:23.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.450974 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:24:23.451011 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.468435 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:24:23.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.472080 systemd[1]: Starting systemd-update-done.service... Dec 13 14:24:23.483834 systemd[1]: Finished systemd-update-done.service. Dec 13 14:24:23.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:24:23.520396 systemd-resolved[1678]: Positive Trust Anchors: Dec 13 14:24:23.520773 systemd-resolved[1678]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:24:23.521016 systemd-resolved[1678]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:24:23.521000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:24:23.521000 audit[1704]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdb27a6140 a2=420 a3=0 items=0 ppid=1675 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:24:23.521000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:24:23.523318 augenrules[1704]: No rules Dec 13 14:24:23.523917 systemd[1]: Finished audit-rules.service. Dec 13 14:24:23.534660 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:24:23.535976 systemd[1]: Reached target time-set.target. Dec 13 14:24:23.564997 systemd-resolved[1678]: Defaulting to hostname 'linux'. Dec 13 14:24:23.566971 systemd[1]: Started systemd-resolved.service. Dec 13 14:24:23.567997 systemd[1]: Reached target network.target. Dec 13 14:24:23.569019 systemd[1]: Reached target nss-lookup.target. Dec 13 14:24:23.570320 systemd[1]: Reached target sysinit.target. Dec 13 14:24:23.571428 systemd[1]: Started motdgen.path. 
Dec 13 14:24:23.572553 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:24:23.574003 systemd[1]: Started logrotate.timer. Dec 13 14:24:23.574875 systemd[1]: Started mdadm.timer. Dec 13 14:24:23.575669 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:24:23.576860 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:24:23.576895 systemd[1]: Reached target paths.target. Dec 13 14:24:23.577740 systemd[1]: Reached target timers.target. Dec 13 14:24:23.578979 systemd[1]: Listening on dbus.socket. Dec 13 14:24:23.580839 systemd[1]: Starting docker.socket... Dec 13 14:24:23.584979 systemd[1]: Listening on sshd.socket. Dec 13 14:24:23.586036 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:23.586807 systemd[1]: Listening on docker.socket. Dec 13 14:24:23.587996 systemd[1]: Reached target sockets.target. Dec 13 14:24:23.589094 systemd[1]: Reached target basic.target. Dec 13 14:24:23.590178 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.590200 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:24:23.591304 systemd[1]: Starting containerd.service... Dec 13 14:24:23.592592 systemd-networkd[1464]: eth0: Gained IPv6LL Dec 13 14:24:23.595927 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:24:23.598709 systemd[1]: Starting dbus.service... Dec 13 14:24:23.601413 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:24:23.606914 systemd[1]: Starting extend-filesystems.service... Dec 13 14:24:23.611142 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:24:23.615667 systemd[1]: Starting motdgen.service... Dec 13 14:24:23.653133 jq[1715]: false Dec 13 14:24:23.618623 systemd[1]: Starting prepare-helm.service... Dec 13 14:24:23.621756 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:24:23.624370 systemd[1]: Starting sshd-keygen.service... Dec 13 14:24:23.628966 systemd[1]: Starting systemd-logind.service... Dec 13 14:24:23.631602 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:24:23.631700 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:24:23.659429 jq[1724]: true Dec 13 14:24:23.632373 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:24:23.633873 systemd[1]: Starting update-engine.service... Dec 13 14:24:23.636661 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:24:23.639285 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:24:23.641978 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:24:23.642344 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:24:23.643047 systemd[1]: Reached target network-online.target. Dec 13 14:24:23.646342 systemd[1]: Started amazon-ssm-agent.service. 
Dec 13 14:24:23.649731 systemd[1]: Starting kubelet.service... Dec 13 14:24:23.653764 systemd[1]: Started nvidia.service. Dec 13 14:24:23.655664 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:24:23.655978 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:24:23.714812 jq[1732]: true Dec 13 14:24:23.715894 systemd-timesyncd[1679]: Contacted time server 204.93.207.12:123 (0.flatcar.pool.ntp.org). Dec 13 14:24:23.716724 systemd-timesyncd[1679]: Initial clock synchronization to Fri 2024-12-13 14:24:23.694442 UTC. Dec 13 14:24:23.789410 tar[1734]: linux-amd64/helm Dec 13 14:24:23.859728 extend-filesystems[1716]: Found loop1 Dec 13 14:24:23.870474 extend-filesystems[1716]: Found nvme0n1 Dec 13 14:24:23.872260 extend-filesystems[1716]: Found nvme0n1p1 Dec 13 14:24:23.873568 extend-filesystems[1716]: Found nvme0n1p2 Dec 13 14:24:23.876018 extend-filesystems[1716]: Found nvme0n1p3 Dec 13 14:24:23.877083 extend-filesystems[1716]: Found usr Dec 13 14:24:23.880758 extend-filesystems[1716]: Found nvme0n1p4 Dec 13 14:24:23.881812 extend-filesystems[1716]: Found nvme0n1p6 Dec 13 14:24:23.882815 extend-filesystems[1716]: Found nvme0n1p7 Dec 13 14:24:23.883129 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:24:23.883344 systemd[1]: Finished motdgen.service. Dec 13 14:24:23.884689 extend-filesystems[1716]: Found nvme0n1p9 Dec 13 14:24:23.885804 extend-filesystems[1716]: Checking size of /dev/nvme0n1p9 Dec 13 14:24:23.894290 dbus-daemon[1714]: [system] SELinux support is enabled Dec 13 14:24:23.894504 systemd[1]: Started dbus.service. Dec 13 14:24:23.899265 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:24:23.899298 systemd[1]: Reached target system-config.target. Dec 13 14:24:23.909804 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:24:23.909845 systemd[1]: Reached target user-config.target. Dec 13 14:24:23.966520 dbus-daemon[1714]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1464 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:24:23.975282 dbus-daemon[1714]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:24:23.981025 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:24:23.994693 amazon-ssm-agent[1726]: 2024/12/13 14:24:23 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 14:24:23.998524 extend-filesystems[1716]: Resized partition /dev/nvme0n1p9 Dec 13 14:24:24.019087 amazon-ssm-agent[1726]: Initializing new seelog logger Dec 13 14:24:24.029077 amazon-ssm-agent[1726]: New Seelog Logger Creation Complete Dec 13 14:24:24.029737 amazon-ssm-agent[1726]: 2024/12/13 14:24:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:24:24.031772 extend-filesystems[1780]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:24:24.038901 amazon-ssm-agent[1726]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 14:24:24.039469 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:24:24.042004 amazon-ssm-agent[1726]: 2024/12/13 14:24:24 processing appconfig overrides Dec 13 14:24:24.105118 update_engine[1723]: I1213 14:24:24.104177 1723 main.cc:92] Flatcar Update Engine starting Dec 13 14:24:24.155222 update_engine[1723]: I1213 14:24:24.110787 1723 update_check_scheduler.cc:74] Next update check in 3m23s Dec 13 14:24:24.110718 systemd[1]: Started update-engine.service. Dec 13 14:24:24.114366 systemd[1]: Started locksmithd.service. Dec 13 14:24:24.173479 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 14:24:24.196352 extend-filesystems[1780]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:24:24.196352 extend-filesystems[1780]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:24:24.196352 extend-filesystems[1780]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 14:24:24.194655 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:24:24.211535 bash[1782]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:24:24.211653 extend-filesystems[1716]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:24:24.194900 systemd[1]: Finished extend-filesystems.service. Dec 13 14:24:24.202230 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:24:24.232641 systemd-logind[1722]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:24:24.232677 systemd-logind[1722]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 14:24:24.232700 systemd-logind[1722]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:24:24.238475 systemd-logind[1722]: New seat seat0. Dec 13 14:24:24.251360 systemd[1]: Started systemd-logind.service. Dec 13 14:24:24.269103 env[1731]: time="2024-12-13T14:24:24.269036822Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:24:24.288827 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:24:24.362528 dbus-daemon[1714]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:24:24.363343 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:24:24.366238 dbus-daemon[1714]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1773 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:24:24.374790 systemd[1]: Starting polkit.service... Dec 13 14:24:24.422011 polkitd[1812]: Started polkitd version 121 Dec 13 14:24:24.438299 env[1731]: time="2024-12-13T14:24:24.438245872Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:24:24.438918 env[1731]: time="2024-12-13T14:24:24.438857087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:24.445565 polkitd[1812]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:24:24.445773 polkitd[1812]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:24:24.446119 env[1731]: time="2024-12-13T14:24:24.446074182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:24:24.446653 env[1731]: time="2024-12-13T14:24:24.446625664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:24.448086 polkitd[1812]: Finished loading, compiling and executing 2 rules Dec 13 14:24:24.448794 dbus-daemon[1714]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:24:24.449256 systemd[1]: Started polkit.service. Dec 13 14:24:24.450636 polkitd[1812]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:24:24.456343 env[1731]: time="2024-12-13T14:24:24.456276013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:24:24.460413 env[1731]: time="2024-12-13T14:24:24.460364118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:24.460703 env[1731]: time="2024-12-13T14:24:24.460648864Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:24:24.460805 env[1731]: time="2024-12-13T14:24:24.460787227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:24.461284 env[1731]: time="2024-12-13T14:24:24.461261302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:24.462050 env[1731]: time="2024-12-13T14:24:24.462026845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:24:24.462721 env[1731]: time="2024-12-13T14:24:24.462692078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:24:24.463177 env[1731]: time="2024-12-13T14:24:24.463154789Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:24:24.463566 env[1731]: time="2024-12-13T14:24:24.463542170Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:24:24.463967 env[1731]: time="2024-12-13T14:24:24.463944265Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:24:24.481675 env[1731]: time="2024-12-13T14:24:24.481632627Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:24:24.481842 env[1731]: time="2024-12-13T14:24:24.481824557Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:24:24.481919 env[1731]: time="2024-12-13T14:24:24.481905505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:24:24.482133 env[1731]: time="2024-12-13T14:24:24.482105638Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:24:24.482265 env[1731]: time="2024-12-13T14:24:24.482250362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:24:24.482358 env[1731]: time="2024-12-13T14:24:24.482343088Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:24:24.482431 env[1731]: time="2024-12-13T14:24:24.482418006Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:24:24.482525 env[1731]: time="2024-12-13T14:24:24.482510723Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:24:24.482597 env[1731]: time="2024-12-13T14:24:24.482584683Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:24:24.482667 env[1731]: time="2024-12-13T14:24:24.482654489Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:24:24.482770 env[1731]: time="2024-12-13T14:24:24.482723646Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:24:24.482845 env[1731]: time="2024-12-13T14:24:24.482831786Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:24:24.483139 env[1731]: time="2024-12-13T14:24:24.483111378Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:24:24.483328 env[1731]: time="2024-12-13T14:24:24.483313457Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:24:24.484042 env[1731]: time="2024-12-13T14:24:24.483953534Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:24:24.484171 env[1731]: time="2024-12-13T14:24:24.484153370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.484363 env[1731]: time="2024-12-13T14:24:24.484340140Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:24:24.484529 env[1731]: time="2024-12-13T14:24:24.484512423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.484918 env[1731]: time="2024-12-13T14:24:24.484677503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.485050 env[1731]: time="2024-12-13T14:24:24.485028284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.485155 env[1731]: time="2024-12-13T14:24:24.485137713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.485256 env[1731]: time="2024-12-13T14:24:24.485239103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.485359 env[1731]: time="2024-12-13T14:24:24.485343008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.485479 env[1731]: time="2024-12-13T14:24:24.485442101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:24:24.485595 env[1731]: time="2024-12-13T14:24:24.485580735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.485704 env[1731]: time="2024-12-13T14:24:24.485689452Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:24:24.486124 env[1731]: time="2024-12-13T14:24:24.486102357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.486252 env[1731]: time="2024-12-13T14:24:24.486232536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.486358 env[1731]: time="2024-12-13T14:24:24.486342461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:24:24.486467 env[1731]: time="2024-12-13T14:24:24.486440804Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:24:24.486572 env[1731]: time="2024-12-13T14:24:24.486545205Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:24:24.486671 env[1731]: time="2024-12-13T14:24:24.486654755Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:24:24.486771 env[1731]: time="2024-12-13T14:24:24.486755740Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:24:24.487291 env[1731]: time="2024-12-13T14:24:24.487259463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:24:24.487907 env[1731]: time="2024-12-13T14:24:24.487815692Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:24:24.491774 env[1731]: time="2024-12-13T14:24:24.489894381Z" level=info msg="Connect containerd service" Dec 13 14:24:24.491774 env[1731]: time="2024-12-13T14:24:24.490024783Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:24:24.491774 env[1731]: time="2024-12-13T14:24:24.491556494Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:24:24.491774 env[1731]: time="2024-12-13T14:24:24.491636467Z" level=info msg="Start subscribing containerd event" Dec 13 14:24:24.491774 env[1731]: time="2024-12-13T14:24:24.491684110Z" level=info msg="Start recovering state" Dec 13 14:24:24.496583 env[1731]: time="2024-12-13T14:24:24.496552003Z" level=info msg="Start event monitor" Dec 13 14:24:24.496959 env[1731]: time="2024-12-13T14:24:24.496936865Z" level=info msg="Start snapshots syncer" Dec 13 14:24:24.498255 systemd-hostnamed[1773]: Hostname set to (transient) Dec 13 14:24:24.498366 systemd-resolved[1678]: System hostname changed to 'ip-172-31-21-15'.
Dec 13 14:24:24.499045 env[1731]: time="2024-12-13T14:24:24.499023428Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:24:24.499143 env[1731]: time="2024-12-13T14:24:24.499127360Z" level=info msg="Start streaming server" Dec 13 14:24:24.499927 env[1731]: time="2024-12-13T14:24:24.499895939Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:24:24.505782 env[1731]: time="2024-12-13T14:24:24.505753996Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:24:24.506065 systemd[1]: Started containerd.service. Dec 13 14:24:24.533564 env[1731]: time="2024-12-13T14:24:24.533521059Z" level=info msg="containerd successfully booted in 0.316097s" Dec 13 14:24:24.628231 coreos-metadata[1713]: Dec 13 14:24:24.627 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:24:24.634604 coreos-metadata[1713]: Dec 13 14:24:24.634 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:24:24.636769 coreos-metadata[1713]: Dec 13 14:24:24.635 INFO Fetch successful Dec 13 14:24:24.636949 coreos-metadata[1713]: Dec 13 14:24:24.636 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:24:24.638312 coreos-metadata[1713]: Dec 13 14:24:24.638 INFO Fetch successful Dec 13 14:24:24.642350 unknown[1713]: wrote ssh authorized keys file for user: core Dec 13 14:24:24.690627 update-ssh-keys[1882]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:24:24.692290 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:24:24.955236 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Create new startup processor Dec 13 14:24:24.959521 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:24:24.959917 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Initializing bookkeeping folders Dec 13 14:24:24.960018 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO removing the completed state files Dec 13 14:24:24.960096 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:24:24.960161 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:24:24.960232 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Initializing healthcheck folders for long running plugins Dec 13 14:24:24.960307 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Initializing locations for inventory plugin Dec 13 14:24:24.960380 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Initializing default location for custom inventory Dec 13 14:24:24.960611 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Initializing default location for file inventory Dec 13 14:24:24.960708 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Initializing default location for role inventory Dec 13 14:24:24.960782 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Init the cloudwatchlogs publisher Dec 13 14:24:24.960856 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:24:24.960920 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:24:24.960991 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:runDockerAction
Dec 13 14:24:24.961056 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:24:24.961187 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:24:24.961263 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:24:24.961326 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:24:24.961576 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:24:24.961662 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:24:24.961739 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:24:24.961859 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:24:24.961941 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO OS: linux, Arch: amd64 Dec 13 14:24:24.970818 amazon-ssm-agent[1726]: datastore file /var/lib/amazon/ssm/i-0b6f0e20336d665c4/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:24:25.059613 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 14:24:25.154262 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:24:25.248722 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:24:25.343162 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:24:25.437867 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:24:25.533095 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [instanceID=i-0b6f0e20336d665c4] Starting association polling Dec 13 14:24:25.608525 tar[1734]: linux-amd64/LICENSE Dec 13 14:24:25.608927 tar[1734]: linux-amd64/README.md Dec 13 14:24:25.615916 systemd[1]: Finished prepare-helm.service. Dec 13 14:24:25.629475 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:24:25.724786 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:24:25.820300 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:24:25.852841 locksmithd[1792]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:24:25.916545 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:24:26.012390 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:24:26.077270 systemd[1]: Started kubelet.service.
Dec 13 14:24:26.108496 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 14:24:26.202951 sshd_keygen[1751]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:24:26.205187 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:24:26.236912 systemd[1]: Finished sshd-keygen.service. Dec 13 14:24:26.239891 systemd[1]: Starting issuegen.service... Dec 13 14:24:26.251293 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:24:26.251523 systemd[1]: Finished issuegen.service. Dec 13 14:24:26.254691 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:24:26.269344 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:24:26.273287 systemd[1]: Started getty@tty1.service. Dec 13 14:24:26.277957 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:24:26.279472 systemd[1]: Reached target getty.target. Dec 13 14:24:26.282004 systemd[1]: Reached target multi-user.target. Dec 13 14:24:26.286792 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:24:26.301265 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:24:26.310801 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:24:26.311155 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:24:26.313827 systemd[1]: Startup finished in 778ms (kernel) + 8.294s (initrd) + 10.113s (userspace) = 19.186s. Dec 13 14:24:26.397965 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0b6f0e20336d665c4, requestId: 56b6a4bc-7443-4b32-a89f-98c3644fc3f2 Dec 13 14:24:26.495418 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [OfflineService] Starting document processing engine... Dec 13 14:24:26.592492 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:24:26.689853 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:24:26.787155 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [OfflineService] Starting message polling Dec 13 14:24:26.884817 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [OfflineService] Starting send replies to MDS Dec 13 14:24:26.982758 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:24:27.080741 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:24:27.178927 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [MessageGatewayService] listening reply. Dec 13 14:24:27.184251 kubelet[1914]: E1213 14:24:27.184178 1914 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:24:27.186578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:24:27.186756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:24:27.187177 systemd[1]: kubelet.service: Consumed 1.169s CPU time. 
Dec 13 14:24:27.277745 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:24:27.376435 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:24:27.475314 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:24:27.574494 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:24:27.673647 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:24:27.773069 amazon-ssm-agent[1726]: 2024-12-13 14:24:24 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:24:27.872774 amazon-ssm-agent[1726]: 2024-12-13 14:24:25 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0b6f0e20336d665c4?role=subscribe&stream=input Dec 13 14:24:27.972546 amazon-ssm-agent[1726]: 2024-12-13 14:24:25 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0b6f0e20336d665c4?role=subscribe&stream=input Dec 13 14:24:28.072509 amazon-ssm-agent[1726]: 2024-12-13 14:24:25 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:24:28.172769 amazon-ssm-agent[1726]: 2024-12-13 14:24:25 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:24:32.637323 systemd[1]: Created slice system-sshd.slice. Dec 13 14:24:32.638785 systemd[1]: Started sshd@0-172.31.21.15:22-139.178.89.65:60912.service. Dec 13 14:24:32.824725 sshd[1935]: Accepted publickey for core from 139.178.89.65 port 60912 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:24:32.829171 sshd[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:32.844281 systemd[1]: Created slice user-500.slice. Dec 13 14:24:32.846077 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:24:32.848606 systemd-logind[1722]: New session 1 of user core. Dec 13 14:24:32.858643 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:24:32.860750 systemd[1]: Starting user@500.service... Dec 13 14:24:32.865415 (systemd)[1938]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:32.979475 systemd[1938]: Queued start job for default target default.target. Dec 13 14:24:32.980132 systemd[1938]: Reached target paths.target. Dec 13 14:24:32.980166 systemd[1938]: Reached target sockets.target. Dec 13 14:24:32.980184 systemd[1938]: Reached target timers.target. Dec 13 14:24:32.980201 systemd[1938]: Reached target basic.target. Dec 13 14:24:32.980259 systemd[1938]: Reached target default.target. Dec 13 14:24:32.980299 systemd[1938]: Startup finished in 107ms. Dec 13 14:24:32.980661 systemd[1]: Started user@500.service. Dec 13 14:24:32.982115 systemd[1]: Started session-1.scope. Dec 13 14:24:33.127609 systemd[1]: Started sshd@1-172.31.21.15:22-139.178.89.65:60920.service. 
Dec 13 14:24:33.293365 sshd[1947]: Accepted publickey for core from 139.178.89.65 port 60920 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:24:33.294808 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:33.301553 systemd-logind[1722]: New session 2 of user core. Dec 13 14:24:33.302665 systemd[1]: Started session-2.scope. Dec 13 14:24:33.425755 sshd[1947]: pam_unix(sshd:session): session closed for user core Dec 13 14:24:33.428932 systemd[1]: sshd@1-172.31.21.15:22-139.178.89.65:60920.service: Deactivated successfully. Dec 13 14:24:33.429818 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:24:33.430597 systemd-logind[1722]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:24:33.431448 systemd-logind[1722]: Removed session 2. Dec 13 14:24:33.451312 systemd[1]: Started sshd@2-172.31.21.15:22-139.178.89.65:60932.service. Dec 13 14:24:33.619400 sshd[1953]: Accepted publickey for core from 139.178.89.65 port 60932 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:24:33.620367 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:33.627130 systemd[1]: Started session-3.scope. Dec 13 14:24:33.627883 systemd-logind[1722]: New session 3 of user core. Dec 13 14:24:33.750868 sshd[1953]: pam_unix(sshd:session): session closed for user core Dec 13 14:24:33.754241 systemd[1]: sshd@2-172.31.21.15:22-139.178.89.65:60932.service: Deactivated successfully. Dec 13 14:24:33.755367 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:24:33.756186 systemd-logind[1722]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:24:33.761093 systemd-logind[1722]: Removed session 3. Dec 13 14:24:33.774925 systemd[1]: Started sshd@3-172.31.21.15:22-139.178.89.65:60944.service. Dec 13 14:24:33.938216 sshd[1959]: Accepted publickey for core from 139.178.89.65 port 60944 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:24:33.939179 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:33.945484 systemd[1]: Started session-4.scope. Dec 13 14:24:33.946135 systemd-logind[1722]: New session 4 of user core. Dec 13 14:24:34.073583 sshd[1959]: pam_unix(sshd:session): session closed for user core Dec 13 14:24:34.077008 systemd[1]: sshd@3-172.31.21.15:22-139.178.89.65:60944.service: Deactivated successfully. Dec 13 14:24:34.077859 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:24:34.078557 systemd-logind[1722]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:24:34.079436 systemd-logind[1722]: Removed session 4. Dec 13 14:24:34.099102 systemd[1]: Started sshd@4-172.31.21.15:22-139.178.89.65:60956.service. Dec 13 14:24:34.263616 sshd[1965]: Accepted publickey for core from 139.178.89.65 port 60956 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:24:34.264665 sshd[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:24:34.271352 systemd[1]: Started session-5.scope. Dec 13 14:24:34.272213 systemd-logind[1722]: New session 5 of user core. Dec 13 14:24:34.400361 sudo[1968]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:24:34.400867 sudo[1968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:24:34.428774 systemd[1]: Starting docker.service... 
Dec 13 14:24:34.500827 env[1978]: time="2024-12-13T14:24:34.500564519Z" level=info msg="Starting up" Dec 13 14:24:34.505322 env[1978]: time="2024-12-13T14:24:34.505040326Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:24:34.505322 env[1978]: time="2024-12-13T14:24:34.505067406Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:24:34.505322 env[1978]: time="2024-12-13T14:24:34.505102101Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:24:34.505322 env[1978]: time="2024-12-13T14:24:34.505163333Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:24:34.507040 env[1978]: time="2024-12-13T14:24:34.506947732Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:24:34.507040 env[1978]: time="2024-12-13T14:24:34.507036361Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:24:34.507256 env[1978]: time="2024-12-13T14:24:34.507059662Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:24:34.507256 env[1978]: time="2024-12-13T14:24:34.507073504Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:24:34.514535 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport493022155-merged.mount: Deactivated successfully. Dec 13 14:24:34.536675 amazon-ssm-agent[1726]: 2024-12-13 14:24:34 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 14:24:34.559988 env[1978]: time="2024-12-13T14:24:34.559949782Z" level=info msg="Loading containers: start." Dec 13 14:24:34.743477 kernel: Initializing XFRM netlink socket Dec 13 14:24:34.789502 env[1978]: time="2024-12-13T14:24:34.789042257Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:24:34.791639 (udev-worker)[1987]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:24:34.951388 systemd-networkd[1464]: docker0: Link UP Dec 13 14:24:34.972358 env[1978]: time="2024-12-13T14:24:34.972316756Z" level=info msg="Loading containers: done." Dec 13 14:24:34.996827 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1734039324-merged.mount: Deactivated successfully. Dec 13 14:24:35.007092 env[1978]: time="2024-12-13T14:24:35.007050337Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:24:35.007519 env[1978]: time="2024-12-13T14:24:35.007490387Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:24:35.007636 env[1978]: time="2024-12-13T14:24:35.007619996Z" level=info msg="Daemon has completed initialization" Dec 13 14:24:35.027344 systemd[1]: Started docker.service. Dec 13 14:24:35.042391 env[1978]: time="2024-12-13T14:24:35.041362790Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:24:36.294089 env[1731]: time="2024-12-13T14:24:36.293910622Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:24:36.894572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145235839.mount: Deactivated successfully. 
Dec 13 14:24:37.290181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:24:37.290567 systemd[1]: Stopped kubelet.service. Dec 13 14:24:37.290625 systemd[1]: kubelet.service: Consumed 1.169s CPU time. Dec 13 14:24:37.292729 systemd[1]: Starting kubelet.service... Dec 13 14:24:37.577249 systemd[1]: Started kubelet.service. Dec 13 14:24:37.686025 kubelet[2110]: E1213 14:24:37.685965 2110 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:24:37.690918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:24:37.691093 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:24:39.126804 env[1731]: time="2024-12-13T14:24:39.126745735Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:39.129929 env[1731]: time="2024-12-13T14:24:39.129863524Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:39.132234 env[1731]: time="2024-12-13T14:24:39.132195842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:39.134385 env[1731]: time="2024-12-13T14:24:39.134345346Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:39.135204 env[1731]: time="2024-12-13T14:24:39.135165839Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:24:39.168008 env[1731]: time="2024-12-13T14:24:39.167966004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:24:41.740225 env[1731]: time="2024-12-13T14:24:41.740167101Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:41.742569 env[1731]: time="2024-12-13T14:24:41.742527829Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:41.744675 env[1731]: time="2024-12-13T14:24:41.744517038Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:41.746852 env[1731]: time="2024-12-13T14:24:41.746812553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:24:41.747624 env[1731]: time="2024-12-13T14:24:41.747586132Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:24:41.765345 env[1731]: time="2024-12-13T14:24:41.765301097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:24:43.211277 env[1731]: time="2024-12-13T14:24:43.211223382Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:43.213791 env[1731]: time="2024-12-13T14:24:43.213745483Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:43.222448 env[1731]: time="2024-12-13T14:24:43.222403079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:43.225212 env[1731]: time="2024-12-13T14:24:43.225165849Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:43.227506 env[1731]: time="2024-12-13T14:24:43.227417721Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:24:43.270919 env[1731]: time="2024-12-13T14:24:43.270733438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:24:44.636115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277082171.mount: Deactivated successfully. Dec 13 14:24:45.533483 env[1731]: time="2024-12-13T14:24:45.533260998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:45.535622 env[1731]: time="2024-12-13T14:24:45.535499917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:45.537668 env[1731]: time="2024-12-13T14:24:45.537449867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:45.539512 env[1731]: time="2024-12-13T14:24:45.539477889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:45.540385 env[1731]: time="2024-12-13T14:24:45.540347525Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:24:45.554987 env[1731]: time="2024-12-13T14:24:45.554933649Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:24:46.110639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248476829.mount: Deactivated successfully.
Dec 13 14:24:47.280128 env[1731]: time="2024-12-13T14:24:47.280077771Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:47.283140 env[1731]: time="2024-12-13T14:24:47.283092271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:47.285075 env[1731]: time="2024-12-13T14:24:47.285037482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:47.286992 env[1731]: time="2024-12-13T14:24:47.286950820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:47.287861 env[1731]: time="2024-12-13T14:24:47.287822690Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:24:47.304009 env[1731]: time="2024-12-13T14:24:47.303962497Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:24:47.789812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:24:47.790082 systemd[1]: Stopped kubelet.service. Dec 13 14:24:47.791818 systemd[1]: Starting kubelet.service... Dec 13 14:24:47.820956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121686296.mount: Deactivated successfully. Dec 13 14:24:47.831403 env[1731]: time="2024-12-13T14:24:47.831343846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:47.835689 env[1731]: time="2024-12-13T14:24:47.835091247Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:47.845168 env[1731]: time="2024-12-13T14:24:47.845124301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:47.851258 env[1731]: time="2024-12-13T14:24:47.851208533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:47.851906 env[1731]: time="2024-12-13T14:24:47.851862781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:24:47.869689 env[1731]: time="2024-12-13T14:24:47.869331339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:24:47.977384 systemd[1]: Started kubelet.service. 
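[Editor's note] Each PullImage above produces ImageCreate/ImageUpdate events in containerd's k8s.io namespace and resolves the tag to a digest-pinned reference, which is what the `returns image reference "sha256:..."` lines record. A sketch of the same pull through the containerd 1.6 Go client; the socket path and namespace are the conventional CRI defaults, assumed here rather than read from this host:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // The CRI plugin pulls into the "k8s.io" namespace, the source of
    // the ImageCreate/ImageUpdate events in the log above.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }
    // A tag pull resolves to a content digest, matching the
    // digest-pinned references logged after each pull.
    fmt.Println(img.Name(), img.Target().Digest)
}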
Dec 13 14:24:48.077009 kubelet[2151]: E1213 14:24:48.076879 2151 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:24:48.079640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:24:48.079808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:24:48.370936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535487510.mount: Deactivated successfully. Dec 13 14:24:50.900687 env[1731]: time="2024-12-13T14:24:50.900632812Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:50.903176 env[1731]: time="2024-12-13T14:24:50.903131290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:50.905559 env[1731]: time="2024-12-13T14:24:50.905519768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:50.907585 env[1731]: time="2024-12-13T14:24:50.907546130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:50.908517 env[1731]: time="2024-12-13T14:24:50.908469442Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:24:54.327806 systemd[1]: Stopped kubelet.service. Dec 13 14:24:54.332181 systemd[1]: Starting kubelet.service... Dec 13 14:24:54.364372 systemd[1]: Reloading. Dec 13 14:24:54.509892 /usr/lib/systemd/system-generators/torcx-generator[2246]: time="2024-12-13T14:24:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:24:54.509934 /usr/lib/systemd/system-generators/torcx-generator[2246]: time="2024-12-13T14:24:54Z" level=info msg="torcx already run" Dec 13 14:24:54.664162 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:24:54.664209 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:24:54.705608 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:24:54.852265 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:24:54.873899 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:24:54.874003 systemd[1]: kubelet.service: Failed with result 'signal'. 
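[Editor's note] Note the two distinct failure shapes systemd reports for kubelet.service in this stretch: `code=exited, status=1/FAILURE` with result 'exit-code' when the process exits on its own (the missing config file), versus `code=killed, status=15/TERM` with result 'signal' when it is stopped during the reload. The same distinction is visible from Go on a Unix host; a small self-contained sketch:

package main

import (
    "errors"
    "fmt"
    "os/exec"
    "syscall"
)

func main() {
    // Run a child that kills itself with SIGTERM, then inspect how the
    // wait status classifies its death, mirroring systemd's two cases.
    err := exec.Command("sh", "-c", "kill -TERM $$").Run()
    var ee *exec.ExitError
    if errors.As(err, &ee) {
        ws := ee.Sys().(syscall.WaitStatus) // Unix-specific
        switch {
        case ws.Signaled():
            fmt.Printf("killed by signal %v (systemd: result 'signal')\n", ws.Signal())
        case ws.Exited():
            fmt.Printf("exited with status %d (systemd: result 'exit-code')\n", ws.ExitStatus())
        }
    }
}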
Dec 13 14:24:54.874280 systemd[1]: Stopped kubelet.service. Dec 13 14:24:54.876435 systemd[1]: Starting kubelet.service... Dec 13 14:24:55.519285 systemd[1]: Started kubelet.service. Dec 13 14:24:55.594153 kubelet[2302]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:24:55.594839 kubelet[2302]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:24:55.594966 kubelet[2302]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:24:55.595343 kubelet[2302]: I1213 14:24:55.595291 2302 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:24:56.272806 kubelet[2302]: I1213 14:24:56.272537 2302 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:24:56.272806 kubelet[2302]: I1213 14:24:56.272569 2302 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:24:56.273026 kubelet[2302]: I1213 14:24:56.272859 2302 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:24:56.373031 kubelet[2302]: I1213 14:24:56.372982 2302 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:24:56.381885 kubelet[2302]: E1213 14:24:56.381850 2302 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.393784 kubelet[2302]: I1213 14:24:56.393753 2302 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:24:56.394056 kubelet[2302]: I1213 14:24:56.394033 2302 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:24:56.394257 kubelet[2302]: I1213 14:24:56.394236 2302 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:24:56.394404 kubelet[2302]: I1213 14:24:56.394269 2302 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:24:56.394404 kubelet[2302]: I1213 14:24:56.394282 2302 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:24:56.394524 kubelet[2302]: I1213 14:24:56.394422 2302 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:24:56.394570 kubelet[2302]: I1213 14:24:56.394557 2302 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:24:56.394612 kubelet[2302]: I1213 14:24:56.394577 2302 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:24:56.394612 kubelet[2302]: I1213 14:24:56.394609 2302 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:24:56.394692 kubelet[2302]: I1213 14:24:56.394628 2302 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:24:56.398567 kubelet[2302]: I1213 14:24:56.398537 2302 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:24:56.411372 kubelet[2302]: W1213 14:24:56.411220 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.21.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-15&limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.411587 kubelet[2302]: E1213 14:24:56.411387 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-15&limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.411587 kubelet[2302]: W1213 14:24:56.411494 2302 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.21.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.411587 kubelet[2302]: E1213 14:24:56.411536 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.412801 kubelet[2302]: I1213 14:24:56.412763 2302 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:24:56.412903 kubelet[2302]: W1213 14:24:56.412841 2302 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:24:56.413597 kubelet[2302]: I1213 14:24:56.413576 2302 server.go:1256] "Started kubelet" Dec 13 14:24:56.416637 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:24:56.416752 kubelet[2302]: I1213 14:24:56.416380 2302 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:24:56.424081 kubelet[2302]: I1213 14:24:56.424050 2302 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:24:56.425689 kubelet[2302]: I1213 14:24:56.425076 2302 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:24:56.426178 kubelet[2302]: E1213 14:24:56.426158 2302 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.15:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-15.1810c2ab0ac573bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-15,UID:ip-172-31-21-15,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-15,},FirstTimestamp:2024-12-13 14:24:56.4135495 +0000 UTC m=+0.885341658,LastTimestamp:2024-12-13 14:24:56.4135495 +0000 UTC m=+0.885341658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-15,}" Dec 13 14:24:56.427079 kubelet[2302]: I1213 14:24:56.426656 2302 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:24:56.427079 kubelet[2302]: I1213 14:24:56.426875 2302 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:24:56.427717 kubelet[2302]: I1213 14:24:56.427702 2302 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:24:56.427954 kubelet[2302]: I1213 14:24:56.427939 2302 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:24:56.428128 kubelet[2302]: I1213 14:24:56.428107 2302 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:24:56.429770 kubelet[2302]: W1213 14:24:56.428717 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.21.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.429916 kubelet[2302]: E1213 14:24:56.429904 2302 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.431542 kubelet[2302]: E1213 14:24:56.431526 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-15?timeout=10s\": dial tcp 172.31.21.15:6443: connect: connection refused" interval="200ms" Dec 13 14:24:56.432100 kubelet[2302]: E1213 14:24:56.432081 2302 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:24:56.432527 kubelet[2302]: I1213 14:24:56.432511 2302 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:24:56.433131 kubelet[2302]: I1213 14:24:56.433108 2302 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:24:56.435597 kubelet[2302]: I1213 14:24:56.435575 2302 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:24:56.458394 kubelet[2302]: I1213 14:24:56.458367 2302 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:24:56.458603 kubelet[2302]: I1213 14:24:56.458592 2302 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:24:56.458682 kubelet[2302]: I1213 14:24:56.458674 2302 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:24:56.463253 kubelet[2302]: I1213 14:24:56.463224 2302 policy_none.go:49] "None policy: Start" Dec 13 14:24:56.464532 kubelet[2302]: I1213 14:24:56.464391 2302 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:24:56.464728 kubelet[2302]: I1213 14:24:56.464713 2302 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:24:56.467314 kubelet[2302]: I1213 14:24:56.467282 2302 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:24:56.468605 kubelet[2302]: I1213 14:24:56.468524 2302 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:24:56.468605 kubelet[2302]: I1213 14:24:56.468557 2302 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:24:56.468605 kubelet[2302]: I1213 14:24:56.468578 2302 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:24:56.468865 kubelet[2302]: E1213 14:24:56.468641 2302 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:24:56.471250 kubelet[2302]: W1213 14:24:56.471204 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.21.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.471250 kubelet[2302]: E1213 14:24:56.471257 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:56.477414 systemd[1]: Created slice kubepods.slice. Dec 13 14:24:56.487626 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:24:56.496139 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:24:56.498358 kubelet[2302]: I1213 14:24:56.498335 2302 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:24:56.498625 kubelet[2302]: I1213 14:24:56.498606 2302 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:24:56.501176 kubelet[2302]: E1213 14:24:56.501161 2302 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-15\" not found" Dec 13 14:24:56.532017 kubelet[2302]: I1213 14:24:56.530181 2302 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-15" Dec 13 14:24:56.532017 kubelet[2302]: E1213 14:24:56.530653 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.15:6443/api/v1/nodes\": dial tcp 172.31.21.15:6443: connect: connection refused" node="ip-172-31-21-15" Dec 13 14:24:56.568846 kubelet[2302]: I1213 14:24:56.568796 2302 topology_manager.go:215] "Topology Admit Handler" podUID="313d9c80a907aceb685b4c2c373e6585" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-15" Dec 13 14:24:56.572898 kubelet[2302]: I1213 14:24:56.572869 2302 topology_manager.go:215] "Topology Admit Handler" podUID="8c8cb32959fe7bd73cf4bd7b22a432ae" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-15" Dec 13 14:24:56.574267 kubelet[2302]: I1213 14:24:56.574149 2302 topology_manager.go:215] "Topology Admit Handler" podUID="f13db0b255eb7d0314af776124c7c781" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-15" Dec 13 14:24:56.583158 systemd[1]: Created slice kubepods-burstable-pod8c8cb32959fe7bd73cf4bd7b22a432ae.slice. Dec 13 14:24:56.595392 systemd[1]: Created slice kubepods-burstable-pod313d9c80a907aceb685b4c2c373e6585.slice. Dec 13 14:24:56.600517 systemd[1]: Created slice kubepods-burstable-podf13db0b255eb7d0314af776124c7c781.slice. 
Dec 13 14:24:56.632292 kubelet[2302]: E1213 14:24:56.632260 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-15?timeout=10s\": dial tcp 172.31.21.15:6443: connect: connection refused" interval="400ms" Dec 13 14:24:56.729797 kubelet[2302]: I1213 14:24:56.729750 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/313d9c80a907aceb685b4c2c373e6585-ca-certs\") pod \"kube-apiserver-ip-172-31-21-15\" (UID: \"313d9c80a907aceb685b4c2c373e6585\") " pod="kube-system/kube-apiserver-ip-172-31-21-15" Dec 13 14:24:56.729961 kubelet[2302]: I1213 14:24:56.729827 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15" Dec 13 14:24:56.729961 kubelet[2302]: I1213 14:24:56.729860 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15" Dec 13 14:24:56.729961 kubelet[2302]: I1213 14:24:56.729902 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/313d9c80a907aceb685b4c2c373e6585-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-15\" (UID: \"313d9c80a907aceb685b4c2c373e6585\") " pod="kube-system/kube-apiserver-ip-172-31-21-15" Dec 13 14:24:56.729961 kubelet[2302]: I1213 14:24:56.729933 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/313d9c80a907aceb685b4c2c373e6585-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-15\" (UID: \"313d9c80a907aceb685b4c2c373e6585\") " pod="kube-system/kube-apiserver-ip-172-31-21-15" Dec 13 14:24:56.730164 kubelet[2302]: I1213 14:24:56.729974 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15" Dec 13 14:24:56.730164 kubelet[2302]: I1213 14:24:56.730006 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15" Dec 13 14:24:56.730164 kubelet[2302]: I1213 14:24:56.730050 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: 
\"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15" Dec 13 14:24:56.730164 kubelet[2302]: I1213 14:24:56.730083 2302 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13db0b255eb7d0314af776124c7c781-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-15\" (UID: \"f13db0b255eb7d0314af776124c7c781\") " pod="kube-system/kube-scheduler-ip-172-31-21-15" Dec 13 14:24:56.732275 kubelet[2302]: I1213 14:24:56.732238 2302 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-15" Dec 13 14:24:56.732680 kubelet[2302]: E1213 14:24:56.732657 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.15:6443/api/v1/nodes\": dial tcp 172.31.21.15:6443: connect: connection refused" node="ip-172-31-21-15" Dec 13 14:24:56.893530 env[1731]: time="2024-12-13T14:24:56.893474378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-15,Uid:8c8cb32959fe7bd73cf4bd7b22a432ae,Namespace:kube-system,Attempt:0,}" Dec 13 14:24:56.899355 env[1731]: time="2024-12-13T14:24:56.899302065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-15,Uid:313d9c80a907aceb685b4c2c373e6585,Namespace:kube-system,Attempt:0,}" Dec 13 14:24:56.903567 env[1731]: time="2024-12-13T14:24:56.903522763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-15,Uid:f13db0b255eb7d0314af776124c7c781,Namespace:kube-system,Attempt:0,}" Dec 13 14:24:57.034110 kubelet[2302]: E1213 14:24:57.034047 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-15?timeout=10s\": dial tcp 172.31.21.15:6443: connect: connection refused" interval="800ms" Dec 13 14:24:57.134951 kubelet[2302]: I1213 14:24:57.134916 2302 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-15" Dec 13 14:24:57.135325 kubelet[2302]: E1213 14:24:57.135299 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.15:6443/api/v1/nodes\": dial tcp 172.31.21.15:6443: connect: connection refused" node="ip-172-31-21-15" Dec 13 14:24:57.306533 kubelet[2302]: W1213 14:24:57.306400 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.21.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:57.306739 kubelet[2302]: E1213 14:24:57.306563 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:57.407794 kubelet[2302]: W1213 14:24:57.407725 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.21.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-15&limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:57.407794 kubelet[2302]: E1213 14:24:57.407801 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://172.31.21.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-15&limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:57.422410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924908536.mount: Deactivated successfully. Dec 13 14:24:57.440057 env[1731]: time="2024-12-13T14:24:57.440008282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.442259 env[1731]: time="2024-12-13T14:24:57.442217646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.449041 env[1731]: time="2024-12-13T14:24:57.448989486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.451685 env[1731]: time="2024-12-13T14:24:57.451638868Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.453166 env[1731]: time="2024-12-13T14:24:57.452981235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.455364 env[1731]: time="2024-12-13T14:24:57.455323460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.459191 env[1731]: time="2024-12-13T14:24:57.459145995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.461080 env[1731]: time="2024-12-13T14:24:57.461038637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.465189 env[1731]: time="2024-12-13T14:24:57.465146870Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.467591 env[1731]: time="2024-12-13T14:24:57.467553315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.469420 env[1731]: time="2024-12-13T14:24:57.469377391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.471556 env[1731]: time="2024-12-13T14:24:57.471519502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:57.492337 kubelet[2302]: W1213 14:24:57.492301 2302 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.21.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:57.492337 kubelet[2302]: E1213 14:24:57.492344 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:57.589008 env[1731]: time="2024-12-13T14:24:57.588640208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:57.589008 env[1731]: time="2024-12-13T14:24:57.588696094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:57.589008 env[1731]: time="2024-12-13T14:24:57.588712543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:57.591519 env[1731]: time="2024-12-13T14:24:57.589911345Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efd92cc7761a1d2be3e61fc6fee18d537101825fc344c995262987a97d61b575 pid=2341 runtime=io.containerd.runc.v2 Dec 13 14:24:57.598078 env[1731]: time="2024-12-13T14:24:57.597932913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:57.598078 env[1731]: time="2024-12-13T14:24:57.597981703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:57.598368 env[1731]: time="2024-12-13T14:24:57.598065079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:57.598516 env[1731]: time="2024-12-13T14:24:57.598347685Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54ba16c5ae62d2d9944af80acf0939bcb2028e8699e5707a6434f175d47e318c pid=2361 runtime=io.containerd.runc.v2 Dec 13 14:24:57.600065 env[1731]: time="2024-12-13T14:24:57.599985225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:57.600429 env[1731]: time="2024-12-13T14:24:57.600393078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:57.600682 env[1731]: time="2024-12-13T14:24:57.600606330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:57.601282 env[1731]: time="2024-12-13T14:24:57.601216659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/abe9e81570dfaf37ba189400b6d66ab6a3f91602b67a8e05d90762030868869b pid=2360 runtime=io.containerd.runc.v2 Dec 13 14:24:57.623738 systemd[1]: Started cri-containerd-efd92cc7761a1d2be3e61fc6fee18d537101825fc344c995262987a97d61b575.scope. Dec 13 14:24:57.664331 systemd[1]: Started cri-containerd-54ba16c5ae62d2d9944af80acf0939bcb2028e8699e5707a6434f175d47e318c.scope. 
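[Editor's note] The three `starting signal loop ... runtime=io.containerd.runc.v2` records above are per-sandbox containerd-shim-runc-v2 processes, one for each control-plane static pod, each wrapped in a cri-containerd-<id>.scope unit by systemd. Those sandboxes can be listed over the same CRI endpoint the kubelet uses; a sketch against the published CRI v1 API (socket path assumed to be the containerd default, and the call needs permission to reach that socket):

package main

import (
    "context"
    "fmt"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Equivalent to `crictl pods`: one entry per sandbox, matching the
    // sandbox IDs in the RunPodSandbox return lines above.
    client := runtimeapi.NewRuntimeServiceClient(conn)
    resp, err := client.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
    if err != nil {
        log.Fatal(err)
    }
    for _, sb := range resp.Items {
        fmt.Printf("%s  %s/%s  %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
    }
}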
Dec 13 14:24:57.679739 systemd[1]: Started cri-containerd-abe9e81570dfaf37ba189400b6d66ab6a3f91602b67a8e05d90762030868869b.scope. Dec 13 14:24:57.680701 kubelet[2302]: W1213 14:24:57.680607 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.21.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:57.680701 kubelet[2302]: E1213 14:24:57.680682 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:57.819533 env[1731]: time="2024-12-13T14:24:57.819446944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-15,Uid:8c8cb32959fe7bd73cf4bd7b22a432ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"efd92cc7761a1d2be3e61fc6fee18d537101825fc344c995262987a97d61b575\"" Dec 13 14:24:57.829968 env[1731]: time="2024-12-13T14:24:57.829925704Z" level=info msg="CreateContainer within sandbox \"efd92cc7761a1d2be3e61fc6fee18d537101825fc344c995262987a97d61b575\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:24:57.835572 kubelet[2302]: E1213 14:24:57.835537 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-15?timeout=10s\": dial tcp 172.31.21.15:6443: connect: connection refused" interval="1.6s" Dec 13 14:24:57.842497 env[1731]: time="2024-12-13T14:24:57.841269777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-15,Uid:f13db0b255eb7d0314af776124c7c781,Namespace:kube-system,Attempt:0,} returns sandbox id \"54ba16c5ae62d2d9944af80acf0939bcb2028e8699e5707a6434f175d47e318c\"" Dec 13 14:24:57.848019 env[1731]: time="2024-12-13T14:24:57.847875559Z" level=info msg="CreateContainer within sandbox \"54ba16c5ae62d2d9944af80acf0939bcb2028e8699e5707a6434f175d47e318c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:24:57.850698 env[1731]: time="2024-12-13T14:24:57.849913432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-15,Uid:313d9c80a907aceb685b4c2c373e6585,Namespace:kube-system,Attempt:0,} returns sandbox id \"abe9e81570dfaf37ba189400b6d66ab6a3f91602b67a8e05d90762030868869b\"" Dec 13 14:24:57.854295 env[1731]: time="2024-12-13T14:24:57.854255968Z" level=info msg="CreateContainer within sandbox \"abe9e81570dfaf37ba189400b6d66ab6a3f91602b67a8e05d90762030868869b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:24:57.911613 env[1731]: time="2024-12-13T14:24:57.911551080Z" level=info msg="CreateContainer within sandbox \"54ba16c5ae62d2d9944af80acf0939bcb2028e8699e5707a6434f175d47e318c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49\"" Dec 13 14:24:57.914859 env[1731]: time="2024-12-13T14:24:57.914810494Z" level=info msg="CreateContainer within sandbox \"efd92cc7761a1d2be3e61fc6fee18d537101825fc344c995262987a97d61b575\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253\"" Dec 13 
14:24:57.915257 env[1731]: time="2024-12-13T14:24:57.915236365Z" level=info msg="StartContainer for \"db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49\"" Dec 13 14:24:57.918201 env[1731]: time="2024-12-13T14:24:57.918155906Z" level=info msg="CreateContainer within sandbox \"abe9e81570dfaf37ba189400b6d66ab6a3f91602b67a8e05d90762030868869b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"62ac3b5b0358cef17e958ff294e00894323b1bb66e244a501ade2ffd156c28c7\"" Dec 13 14:24:57.918822 env[1731]: time="2024-12-13T14:24:57.918784037Z" level=info msg="StartContainer for \"6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253\"" Dec 13 14:24:57.923040 env[1731]: time="2024-12-13T14:24:57.922999084Z" level=info msg="StartContainer for \"62ac3b5b0358cef17e958ff294e00894323b1bb66e244a501ade2ffd156c28c7\"" Dec 13 14:24:57.938368 kubelet[2302]: I1213 14:24:57.937659 2302 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-15" Dec 13 14:24:57.938368 kubelet[2302]: E1213 14:24:57.938152 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.15:6443/api/v1/nodes\": dial tcp 172.31.21.15:6443: connect: connection refused" node="ip-172-31-21-15" Dec 13 14:24:57.949560 systemd[1]: Started cri-containerd-db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49.scope. Dec 13 14:24:57.972359 systemd[1]: Started cri-containerd-6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253.scope. Dec 13 14:24:58.000815 systemd[1]: Started cri-containerd-62ac3b5b0358cef17e958ff294e00894323b1bb66e244a501ade2ffd156c28c7.scope. Dec 13 14:24:58.082723 env[1731]: time="2024-12-13T14:24:58.082612426Z" level=info msg="StartContainer for \"db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49\" returns successfully" Dec 13 14:24:58.102527 env[1731]: time="2024-12-13T14:24:58.101600887Z" level=info msg="StartContainer for \"6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253\" returns successfully" Dec 13 14:24:58.115707 env[1731]: time="2024-12-13T14:24:58.115656828Z" level=info msg="StartContainer for \"62ac3b5b0358cef17e958ff294e00894323b1bb66e244a501ade2ffd156c28c7\" returns successfully" Dec 13 14:24:58.487730 kubelet[2302]: E1213 14:24:58.486187 2302 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:59.153195 kubelet[2302]: W1213 14:24:59.153154 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.21.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-15&limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:59.153807 kubelet[2302]: E1213 14:24:59.153788 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-15&limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:24:59.437122 kubelet[2302]: E1213 14:24:59.437015 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.21.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-15?timeout=10s\": dial tcp 172.31.21.15:6443: connect: connection refused" interval="3.2s" Dec 13 14:24:59.541780 kubelet[2302]: I1213 14:24:59.541745 2302 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-15" Dec 13 14:24:59.542181 kubelet[2302]: E1213 14:24:59.542156 2302 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.15:6443/api/v1/nodes\": dial tcp 172.31.21.15:6443: connect: connection refused" node="ip-172-31-21-15" Dec 13 14:25:00.027751 kubelet[2302]: W1213 14:25:00.027703 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.21.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:25:00.029375 kubelet[2302]: E1213 14:25:00.029309 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:25:00.520413 kubelet[2302]: W1213 14:25:00.520080 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.21.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:25:00.520413 kubelet[2302]: E1213 14:25:00.520419 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:25:00.539480 kubelet[2302]: W1213 14:25:00.539393 2302 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.21.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:25:00.539480 kubelet[2302]: E1213 14:25:00.539477 2302 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.15:6443: connect: connection refused Dec 13 14:25:02.747047 kubelet[2302]: I1213 14:25:02.747019 2302 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-15" Dec 13 14:25:03.938496 kubelet[2302]: E1213 14:25:03.938441 2302 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-15\" not found" node="ip-172-31-21-15" Dec 13 14:25:04.033505 kubelet[2302]: I1213 14:25:04.033468 2302 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-15" Dec 13 14:25:04.090234 kubelet[2302]: E1213 14:25:04.090203 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:04.094118 kubelet[2302]: E1213 14:25:04.094089 2302 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-15.1810c2ab0ac573bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-15,UID:ip-172-31-21-15,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-15,},FirstTimestamp:2024-12-13 14:24:56.4135495 +0000 UTC m=+0.885341658,LastTimestamp:2024-12-13 14:24:56.4135495 +0000 UTC m=+0.885341658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-15,}" Dec 13 14:25:04.191508 kubelet[2302]: E1213 14:25:04.191356 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:04.292513 kubelet[2302]: E1213 14:25:04.292480 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:04.392798 kubelet[2302]: E1213 14:25:04.392759 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:04.493774 kubelet[2302]: E1213 14:25:04.493318 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:04.566690 amazon-ssm-agent[1726]: 2024-12-13 14:25:04 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:25:04.596384 kubelet[2302]: E1213 14:25:04.596289 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:04.697436 kubelet[2302]: E1213 14:25:04.697384 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:04.798202 kubelet[2302]: E1213 14:25:04.798166 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:04.902108 kubelet[2302]: E1213 14:25:04.902069 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:05.002761 kubelet[2302]: E1213 14:25:05.002720 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:05.103641 kubelet[2302]: E1213 14:25:05.103527 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:05.204339 kubelet[2302]: E1213 14:25:05.204300 2302 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-15\" not found" Dec 13 14:25:05.408781 kubelet[2302]: I1213 14:25:05.408664 2302 apiserver.go:52] "Watching apiserver" Dec 13 14:25:05.429037 kubelet[2302]: I1213 14:25:05.428976 2302 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:25:07.296600 systemd[1]: Reloading. 
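[Editor's note] The eviction manager and status loops keep reporting `node "ip-172-31-21-15" not found` above until registration finally lands, several seconds after the apiserver comes up. From outside the kubelet, waiting for that moment is a poll for the Node object; a client-go sketch, where the admin kubeconfig path is an assumption:

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    for {
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ip-172-31-21-15", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            time.Sleep(2 * time.Second) // node not registered yet
            continue
        }
        if err != nil {
            log.Fatal(err) // e.g. apiserver still refusing connections
        }
        fmt.Println("registered:", node.Name)
        return
    }
}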
Dec 13 14:25:07.432933 /usr/lib/systemd/system-generators/torcx-generator[2594]: time="2024-12-13T14:25:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:25:07.433062 /usr/lib/systemd/system-generators/torcx-generator[2594]: time="2024-12-13T14:25:07Z" level=info msg="torcx already run" Dec 13 14:25:07.627063 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:25:07.627087 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:07.654808 kubelet[2302]: I1213 14:25:07.654778 2302 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-15" podStartSLOduration=1.654699922 podStartE2EDuration="1.654699922s" podCreationTimestamp="2024-12-13 14:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:06.561106858 +0000 UTC m=+11.032899042" watchObservedRunningTime="2024-12-13 14:25:07.654699922 +0000 UTC m=+12.126492078" Dec 13 14:25:07.680155 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:08.188879 kubelet[2302]: I1213 14:25:08.188774 2302 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:25:08.189101 systemd[1]: Stopping kubelet.service... Dec 13 14:25:08.210089 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:25:08.210418 systemd[1]: Stopped kubelet.service. Dec 13 14:25:08.210697 systemd[1]: kubelet.service: Consumed 1.347s CPU time. Dec 13 14:25:08.214246 systemd[1]: Starting kubelet.service... Dec 13 14:25:09.129483 update_engine[1723]: I1213 14:25:09.128607 1723 update_attempter.cc:509] Updating boot flags... Dec 13 14:25:09.410898 systemd[1]: Started kubelet.service. Dec 13 14:25:09.590771 kubelet[2748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:25:09.590771 kubelet[2748]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:25:09.590771 kubelet[2748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
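[Editor's note] The deprecation warnings repeated at each kubelet start all point the same way: move flag values into the file passed via --config. A sketch writing a minimal kubelet.config.k8s.io/v1beta1 stanza that covers two of them; --pod-infra-container-image has no config-file equivalent, since (per the warning itself) the image garbage collector now takes the sandbox image from the CRI runtime. The endpoint value is the conventional containerd default, assumed rather than read from this host; volumePluginDir matches the Flexvolume path the kubelet probed earlier in the log:

package main

import (
    "log"
    "os"
)

// Minimal, illustrative KubeletConfiguration; on a kubeadm node this
// file is normally generated by kubeadm, not written by hand.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
staticPodPath: /etc/kubernetes/manifests
`

func main() {
    if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
        log.Fatal(err)
    }
}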
Dec 13 14:25:09.591449 kubelet[2748]: I1213 14:25:09.590863 2748 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:25:09.625129 kubelet[2748]: I1213 14:25:09.625091 2748 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:25:09.625341 kubelet[2748]: I1213 14:25:09.625328 2748 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:25:09.625731 kubelet[2748]: I1213 14:25:09.625712 2748 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:25:09.630656 kubelet[2748]: I1213 14:25:09.630623 2748 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:25:09.643129 kubelet[2748]: I1213 14:25:09.643096 2748 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:25:09.652032 kubelet[2748]: I1213 14:25:09.652000 2748 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:25:09.652511 kubelet[2748]: I1213 14:25:09.652493 2748 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:25:09.652899 kubelet[2748]: I1213 14:25:09.652876 2748 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:25:09.653057 kubelet[2748]: I1213 14:25:09.652974 2748 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:25:09.653057 kubelet[2748]: I1213 14:25:09.652995 2748 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:25:09.653057 kubelet[2748]: I1213 14:25:09.653036 2748 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:25:09.654679 kubelet[2748]: I1213 14:25:09.654054 2748 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:25:09.654679 kubelet[2748]: I1213 14:25:09.654083 2748 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:25:09.654679 kubelet[2748]: I1213 14:25:09.654116 2748 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:25:09.654679 kubelet[2748]: I1213 14:25:09.654135 2748 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:25:09.666710 kubelet[2748]: I1213 14:25:09.663346 2748 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:25:09.666710 kubelet[2748]: I1213 14:25:09.663627 2748 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:25:09.666710 kubelet[2748]: I1213 14:25:09.664071 2748 server.go:1256] "Started kubelet"
Dec 13 14:25:09.669415 kubelet[2748]: I1213 14:25:09.668480 2748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:25:09.671201 kubelet[2748]: I1213 14:25:09.671172 2748 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:25:09.674694 kubelet[2748]: I1213 14:25:09.674671 2748 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:25:09.682495 sudo[2762]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:25:09.682917 sudo[2762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 14:25:09.698742 kubelet[2748]: I1213 14:25:09.698713 2748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:25:09.698989 kubelet[2748]: I1213 14:25:09.698965 2748 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:25:09.701420 kubelet[2748]: I1213 14:25:09.701392 2748 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:25:09.716423 kubelet[2748]: I1213 14:25:09.716380 2748 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:25:09.717013 kubelet[2748]: I1213 14:25:09.716977 2748 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:25:09.744101 kubelet[2748]: I1213 14:25:09.741791 2748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:25:09.750146 kubelet[2748]: E1213 14:25:09.750103 2748 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:25:09.750579 kubelet[2748]: I1213 14:25:09.750558 2748 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:25:09.750579 kubelet[2748]: I1213 14:25:09.750579 2748 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:25:09.750768 kubelet[2748]: I1213 14:25:09.750736 2748 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:25:09.789294 kubelet[2748]: I1213 14:25:09.789260 2748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:25:09.789294 kubelet[2748]: I1213 14:25:09.789303 2748 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:25:09.789522 kubelet[2748]: I1213 14:25:09.789325 2748 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:25:09.789522 kubelet[2748]: E1213 14:25:09.789389 2748 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:25:09.814108 kubelet[2748]: I1213 14:25:09.814078 2748 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-15"
Dec 13 14:25:09.837631 kubelet[2748]: I1213 14:25:09.837586 2748 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-21-15"
Dec 13 14:25:09.838856 kubelet[2748]: I1213 14:25:09.838822 2748 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-15"
Dec 13 14:25:09.889576 kubelet[2748]: E1213 14:25:09.889536 2748 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:25:09.891368 kubelet[2748]: I1213 14:25:09.891309 2748 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:25:09.891368 kubelet[2748]: I1213 14:25:09.891357 2748 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:25:09.891368 kubelet[2748]: I1213 14:25:09.891377 2748 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:25:09.891607 kubelet[2748]: I1213 14:25:09.891597 2748 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:25:09.891656 kubelet[2748]: I1213 14:25:09.891627 2748 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:25:09.891656 kubelet[2748]: I1213 14:25:09.891637 2748 policy_none.go:49] "None policy: Start"
Dec 13 14:25:09.892445 kubelet[2748]: I1213 14:25:09.892410 2748 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:25:09.892577 kubelet[2748]: I1213 14:25:09.892488 2748 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:25:09.893583 kubelet[2748]: I1213 14:25:09.892750 2748 state_mem.go:75] "Updated machine memory state"
Dec 13 14:25:09.900698 kubelet[2748]: I1213 14:25:09.900654 2748 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:25:09.902215 kubelet[2748]: I1213 14:25:09.902182 2748 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:25:10.090713 kubelet[2748]: I1213 14:25:10.090662 2748 topology_manager.go:215] "Topology Admit Handler" podUID="313d9c80a907aceb685b4c2c373e6585" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-15"
Dec 13 14:25:10.090941 kubelet[2748]: I1213 14:25:10.090791 2748 topology_manager.go:215] "Topology Admit Handler" podUID="8c8cb32959fe7bd73cf4bd7b22a432ae" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-15"
Dec 13 14:25:10.090941 kubelet[2748]: I1213 14:25:10.090845 2748 topology_manager.go:215] "Topology Admit Handler" podUID="f13db0b255eb7d0314af776124c7c781" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-15"
Dec 13 14:25:10.102974 kubelet[2748]: E1213 14:25:10.102908 2748 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-21-15\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-15"
Dec 13 14:25:10.105135 kubelet[2748]: E1213 14:25:10.105098 2748 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-21-15\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-15"
Dec 13 14:25:10.133779 kubelet[2748]: I1213 14:25:10.133610 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/313d9c80a907aceb685b4c2c373e6585-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-15\" (UID: \"313d9c80a907aceb685b4c2c373e6585\") " pod="kube-system/kube-apiserver-ip-172-31-21-15"
Dec 13 14:25:10.133961 kubelet[2748]: I1213 14:25:10.133782 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/313d9c80a907aceb685b4c2c373e6585-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-15\" (UID: \"313d9c80a907aceb685b4c2c373e6585\") " pod="kube-system/kube-apiserver-ip-172-31-21-15"
Dec 13 14:25:10.133961 kubelet[2748]: I1213 14:25:10.133828 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15"
Dec 13 14:25:10.133961 kubelet[2748]: I1213 14:25:10.133854 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13db0b255eb7d0314af776124c7c781-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-15\" (UID: \"f13db0b255eb7d0314af776124c7c781\") " pod="kube-system/kube-scheduler-ip-172-31-21-15"
Dec 13 14:25:10.133961 kubelet[2748]: I1213 14:25:10.133881 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/313d9c80a907aceb685b4c2c373e6585-ca-certs\") pod \"kube-apiserver-ip-172-31-21-15\" (UID: \"313d9c80a907aceb685b4c2c373e6585\") " pod="kube-system/kube-apiserver-ip-172-31-21-15"
Dec 13 14:25:10.133961 kubelet[2748]: I1213 14:25:10.133912 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15"
Dec 13 14:25:10.134189 kubelet[2748]: I1213 14:25:10.133939 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15"
Dec 13 14:25:10.134189 kubelet[2748]: I1213 14:25:10.133968 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15"
Dec 13 14:25:10.134189 kubelet[2748]: I1213 14:25:10.134004 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c8cb32959fe7bd73cf4bd7b22a432ae-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-15\" (UID: \"8c8cb32959fe7bd73cf4bd7b22a432ae\") " pod="kube-system/kube-controller-manager-ip-172-31-21-15"
Dec 13 14:25:10.660408 kubelet[2748]: I1213 14:25:10.660272 2748 apiserver.go:52] "Watching apiserver"
Dec 13 14:25:10.717707 kubelet[2748]: I1213 14:25:10.717653 2748 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:25:10.879385 kubelet[2748]: I1213 14:25:10.879344 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-15" podStartSLOduration=0.879249021 podStartE2EDuration="879.249021ms" podCreationTimestamp="2024-12-13 14:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:10.877015212 +0000 UTC m=+1.424357789" watchObservedRunningTime="2024-12-13 14:25:10.879249021 +0000 UTC m=+1.426591578"
Dec 13 14:25:10.937636 kubelet[2748]: I1213 14:25:10.937424 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-15" podStartSLOduration=3.937169977 podStartE2EDuration="3.937169977s" podCreationTimestamp="2024-12-13 14:25:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:10.915909558 +0000 UTC m=+1.463252133" watchObservedRunningTime="2024-12-13 14:25:10.937169977 +0000 UTC m=+1.484512553"
Dec 13 14:25:11.501446 sudo[2762]: pam_unix(sudo:session): session closed for user root
Dec 13 14:25:13.878414 sudo[1968]: pam_unix(sudo:session): session closed for user root
Dec 13 14:25:13.901909 sshd[1965]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:13.906777 systemd[1]: sshd@4-172.31.21.15:22-139.178.89.65:60956.service: Deactivated successfully.
Dec 13 14:25:13.908219 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:25:13.908634 systemd[1]: session-5.scope: Consumed 4.930s CPU time.
Dec 13 14:25:13.909840 systemd-logind[1722]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:25:13.911032 systemd-logind[1722]: Removed session 5.
Dec 13 14:25:19.860999 kubelet[2748]: I1213 14:25:19.860962 2748 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:25:19.862837 env[1731]: time="2024-12-13T14:25:19.862732423Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:25:19.863746 kubelet[2748]: I1213 14:25:19.863714 2748 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:25:20.359373 kubelet[2748]: I1213 14:25:20.359338 2748 topology_manager.go:215] "Topology Admit Handler" podUID="888c6e2e-b75c-4eed-81d8-e334405eafb0" podNamespace="kube-system" podName="kube-proxy-8vhqw"
Dec 13 14:25:20.375025 systemd[1]: Created slice kubepods-besteffort-pod888c6e2e_b75c_4eed_81d8_e334405eafb0.slice.
Dec 13 14:25:20.418005 kubelet[2748]: I1213 14:25:20.417959 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/888c6e2e-b75c-4eed-81d8-e334405eafb0-kube-proxy\") pod \"kube-proxy-8vhqw\" (UID: \"888c6e2e-b75c-4eed-81d8-e334405eafb0\") " pod="kube-system/kube-proxy-8vhqw" Dec 13 14:25:20.418702 kubelet[2748]: I1213 14:25:20.418023 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/888c6e2e-b75c-4eed-81d8-e334405eafb0-lib-modules\") pod \"kube-proxy-8vhqw\" (UID: \"888c6e2e-b75c-4eed-81d8-e334405eafb0\") " pod="kube-system/kube-proxy-8vhqw" Dec 13 14:25:20.418702 kubelet[2748]: I1213 14:25:20.418055 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqzx6\" (UniqueName: \"kubernetes.io/projected/888c6e2e-b75c-4eed-81d8-e334405eafb0-kube-api-access-vqzx6\") pod \"kube-proxy-8vhqw\" (UID: \"888c6e2e-b75c-4eed-81d8-e334405eafb0\") " pod="kube-system/kube-proxy-8vhqw" Dec 13 14:25:20.418702 kubelet[2748]: I1213 14:25:20.418081 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/888c6e2e-b75c-4eed-81d8-e334405eafb0-xtables-lock\") pod \"kube-proxy-8vhqw\" (UID: \"888c6e2e-b75c-4eed-81d8-e334405eafb0\") " pod="kube-system/kube-proxy-8vhqw" Dec 13 14:25:20.420699 kubelet[2748]: I1213 14:25:20.420665 2748 topology_manager.go:215] "Topology Admit Handler" podUID="f531703e-ee31-46ae-b8c2-67ff72a8ab44" podNamespace="kube-system" podName="cilium-w9qhc" Dec 13 14:25:20.428870 systemd[1]: Created slice kubepods-burstable-podf531703e_ee31_46ae_b8c2_67ff72a8ab44.slice. 
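An aside on what these records encode: the nodeConfig dump earlier in this boot carries the kubelet's default hard-eviction thresholds (memory.available below an absolute 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, and imagefs.available below 15% of capacity), and the kubepods-besteffort-*/kubepods-burstable-* slices being created here are the per-QoS cgroup hierarchy those limits protect. A minimal sketch of how such mixed quantity/percentage signals resolve against capacity; the 2 GiB and 20 GiB capacities below are hypothetical, not read from this host:

    package main

    import "fmt"

    // threshold mirrors the shape seen in the kubelet's nodeConfig dump:
    // either an absolute quantity in bytes or a percentage of capacity is set.
    type threshold struct {
        signal   string
        quantity int64   // absolute bytes; 0 if unset
        percent  float64 // fraction of capacity; 0 if unset
    }

    // resolve turns a threshold into a concrete byte count for this node.
    func resolve(t threshold, capacity int64) int64 {
        if t.quantity > 0 {
            return t.quantity
        }
        return int64(t.percent * float64(capacity))
    }

    func main() {
        memCap, diskCap := int64(2<<30), int64(20<<30) // hypothetical capacities
        checks := []struct {
            t   threshold
            cap int64
        }{
            {threshold{"memory.available", 100 << 20, 0}, memCap}, // "100Mi"
            {threshold{"nodefs.available", 0, 0.10}, diskCap},     // 10%
            {threshold{"nodefs.inodesFree", 0, 0.05}, diskCap},    // 5% (of inodes in reality)
            {threshold{"imagefs.available", 0, 0.15}, diskCap},    // 15%
        }
        for _, c := range checks {
            fmt.Printf("%s: evict below %d bytes\n", c.t.signal, resolve(c.t, c.cap))
        }
    }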
Dec 13 14:25:20.519593 kubelet[2748]: I1213 14:25:20.519382 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-run\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.519881 kubelet[2748]: I1213 14:25:20.519726 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-hostproc\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.520106 kubelet[2748]: I1213 14:25:20.519897 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6thw\" (UniqueName: \"kubernetes.io/projected/f531703e-ee31-46ae-b8c2-67ff72a8ab44-kube-api-access-d6thw\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.520250 kubelet[2748]: I1213 14:25:20.520233 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f531703e-ee31-46ae-b8c2-67ff72a8ab44-clustermesh-secrets\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.520824 kubelet[2748]: I1213 14:25:20.520796 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-bpf-maps\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.520989 kubelet[2748]: I1213 14:25:20.520972 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f531703e-ee31-46ae-b8c2-67ff72a8ab44-hubble-tls\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.521054 kubelet[2748]: I1213 14:25:20.521045 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-cgroup\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.521111 kubelet[2748]: I1213 14:25:20.521093 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-etc-cni-netd\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.521160 kubelet[2748]: I1213 14:25:20.521135 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-config-path\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.521382 kubelet[2748]: I1213 14:25:20.521364 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-lib-modules\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.521447 kubelet[2748]: I1213 14:25:20.521424 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-host-proc-sys-kernel\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.521530 kubelet[2748]: I1213 14:25:20.521517 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cni-path\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.521588 kubelet[2748]: I1213 14:25:20.521570 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-xtables-lock\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.521635 kubelet[2748]: I1213 14:25:20.521605 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-host-proc-sys-net\") pod \"cilium-w9qhc\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") " pod="kube-system/cilium-w9qhc"
Dec 13 14:25:20.601053 kubelet[2748]: I1213 14:25:20.601016 2748 topology_manager.go:215] "Topology Admit Handler" podUID="6787b7f6-4e80-45ef-ad8c-902e1c3fed5a" podNamespace="kube-system" podName="cilium-operator-5cc964979-dzmpk"
Dec 13 14:25:20.611908 systemd[1]: Created slice kubepods-besteffort-pod6787b7f6_4e80_45ef_ad8c_902e1c3fed5a.slice.
Dec 13 14:25:20.692414 env[1731]: time="2024-12-13T14:25:20.691627358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8vhqw,Uid:888c6e2e-b75c-4eed-81d8-e334405eafb0,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:20.731822 kubelet[2748]: I1213 14:25:20.731784 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a-cilium-config-path\") pod \"cilium-operator-5cc964979-dzmpk\" (UID: \"6787b7f6-4e80-45ef-ad8c-902e1c3fed5a\") " pod="kube-system/cilium-operator-5cc964979-dzmpk"
Dec 13 14:25:20.732102 kubelet[2748]: I1213 14:25:20.732088 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5rjz\" (UniqueName: \"kubernetes.io/projected/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a-kube-api-access-w5rjz\") pod \"cilium-operator-5cc964979-dzmpk\" (UID: \"6787b7f6-4e80-45ef-ad8c-902e1c3fed5a\") " pod="kube-system/cilium-operator-5cc964979-dzmpk"
Dec 13 14:25:20.732928 env[1731]: time="2024-12-13T14:25:20.732882071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9qhc,Uid:f531703e-ee31-46ae-b8c2-67ff72a8ab44,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:20.747079 env[1731]: time="2024-12-13T14:25:20.746991258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:20.747079 env[1731]: time="2024-12-13T14:25:20.747052794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:20.747468 env[1731]: time="2024-12-13T14:25:20.747404455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:20.748548 env[1731]: time="2024-12-13T14:25:20.747742169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8318ccdfc8a41035481ec22cb2a121d8f679b125f5b6934dc8e9d3af8e721284 pid=2830 runtime=io.containerd.runc.v2
Dec 13 14:25:20.762283 env[1731]: time="2024-12-13T14:25:20.762175162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:20.762481 env[1731]: time="2024-12-13T14:25:20.762310770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:20.762481 env[1731]: time="2024-12-13T14:25:20.762342295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:20.762702 env[1731]: time="2024-12-13T14:25:20.762650990Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba pid=2846 runtime=io.containerd.runc.v2
Dec 13 14:25:20.781848 systemd[1]: Started cri-containerd-2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba.scope.
Dec 13 14:25:20.816518 systemd[1]: Started cri-containerd-8318ccdfc8a41035481ec22cb2a121d8f679b125f5b6934dc8e9d3af8e721284.scope.
Dec 13 14:25:20.883978 env[1731]: time="2024-12-13T14:25:20.882132905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9qhc,Uid:f531703e-ee31-46ae-b8c2-67ff72a8ab44,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\""
Dec 13 14:25:20.890491 env[1731]: time="2024-12-13T14:25:20.890150757Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:25:20.908502 env[1731]: time="2024-12-13T14:25:20.907522541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8vhqw,Uid:888c6e2e-b75c-4eed-81d8-e334405eafb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8318ccdfc8a41035481ec22cb2a121d8f679b125f5b6934dc8e9d3af8e721284\""
Dec 13 14:25:20.919073 env[1731]: time="2024-12-13T14:25:20.919020065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dzmpk,Uid:6787b7f6-4e80-45ef-ad8c-902e1c3fed5a,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:20.926973 env[1731]: time="2024-12-13T14:25:20.926928037Z" level=info msg="CreateContainer within sandbox \"8318ccdfc8a41035481ec22cb2a121d8f679b125f5b6934dc8e9d3af8e721284\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:25:20.975398 env[1731]: time="2024-12-13T14:25:20.975310960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:20.975827 env[1731]: time="2024-12-13T14:25:20.975357833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:20.975827 env[1731]: time="2024-12-13T14:25:20.975373525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:20.976062 env[1731]: time="2024-12-13T14:25:20.975879630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45 pid=2916 runtime=io.containerd.runc.v2
Dec 13 14:25:20.996264 systemd[1]: Started cri-containerd-1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45.scope.
Dec 13 14:25:21.016311 env[1731]: time="2024-12-13T14:25:21.016243335Z" level=info msg="CreateContainer within sandbox \"8318ccdfc8a41035481ec22cb2a121d8f679b125f5b6934dc8e9d3af8e721284\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7051d048d0f6c6f899a2ef44db528223850575d010b2272692359678d6d4c493\""
Dec 13 14:25:21.018225 env[1731]: time="2024-12-13T14:25:21.018187823Z" level=info msg="StartContainer for \"7051d048d0f6c6f899a2ef44db528223850575d010b2272692359678d6d4c493\""
Dec 13 14:25:21.067007 systemd[1]: Started cri-containerd-7051d048d0f6c6f899a2ef44db528223850575d010b2272692359678d6d4c493.scope.
Dec 13 14:25:21.103837 env[1731]: time="2024-12-13T14:25:21.103796625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-dzmpk,Uid:6787b7f6-4e80-45ef-ad8c-902e1c3fed5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\""
Dec 13 14:25:21.154368 env[1731]: time="2024-12-13T14:25:21.154261460Z" level=info msg="StartContainer for \"7051d048d0f6c6f899a2ef44db528223850575d010b2272692359678d6d4c493\" returns successfully"
Dec 13 14:25:21.919337 kubelet[2748]: I1213 14:25:21.919295 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8vhqw" podStartSLOduration=1.919227944 podStartE2EDuration="1.919227944s" podCreationTimestamp="2024-12-13 14:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:21.918580799 +0000 UTC m=+12.465923375" watchObservedRunningTime="2024-12-13 14:25:21.919227944 +0000 UTC m=+12.466570523"
Dec 13 14:25:28.752805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927260879.mount: Deactivated successfully.
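The create/start sequence in the records above — RunPodSandbox returning a sandbox id, CreateContainer "within sandbox", then StartContainer — is the CRI call chain the kubelet drives against containerd's CRI plugin over its unix socket. A rough sketch of the same three calls using the CRI v1 Go bindings; the socket path, image reference, and the elided prior image pull are assumptions for illustration, not details taken from this log:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed CRI endpoint; containerd's CRI plugin listens on a unix socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: creates the pause sandbox; names/UID copied from the log.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-8vhqw",
                Namespace: "kube-system",
                Uid:       "888c6e2e-b75c-4eed-81d8-e334405eafb0",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer within that sandbox (image must already be pulled
        //    via the image service; that step is omitted here, and the image
        //    reference below is a guess, not read from this host).
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.29.2"},
            },
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer, which is what "StartContainer ... returns
        //    successfully" acknowledges in the log.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: ctr.ContainerId,
        }); err != nil {
            panic(err)
        }
    }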
Dec 13 14:25:33.132032 env[1731]: time="2024-12-13T14:25:33.131915929Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.135089 env[1731]: time="2024-12-13T14:25:33.135045315Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.137597 env[1731]: time="2024-12-13T14:25:33.137523304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:33.138991 env[1731]: time="2024-12-13T14:25:33.138950145Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:25:33.140618 env[1731]: time="2024-12-13T14:25:33.140566327Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:25:33.145059 env[1731]: time="2024-12-13T14:25:33.145012189Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:25:33.170405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775932318.mount: Deactivated successfully. Dec 13 14:25:33.211055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4251650136.mount: Deactivated successfully. Dec 13 14:25:33.211482 env[1731]: time="2024-12-13T14:25:33.211135779Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\"" Dec 13 14:25:33.222696 env[1731]: time="2024-12-13T14:25:33.221315950Z" level=info msg="StartContainer for \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\"" Dec 13 14:25:33.275536 systemd[1]: Started cri-containerd-cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44.scope. Dec 13 14:25:33.361529 env[1731]: time="2024-12-13T14:25:33.361476329Z" level=info msg="StartContainer for \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\" returns successfully" Dec 13 14:25:33.367734 systemd[1]: cri-containerd-cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44.scope: Deactivated successfully. 
Dec 13 14:25:33.435300 env[1731]: time="2024-12-13T14:25:33.435168440Z" level=info msg="shim disconnected" id=cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44 Dec 13 14:25:33.435300 env[1731]: time="2024-12-13T14:25:33.435216251Z" level=warning msg="cleaning up after shim disconnected" id=cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44 namespace=k8s.io Dec 13 14:25:33.435300 env[1731]: time="2024-12-13T14:25:33.435228772Z" level=info msg="cleaning up dead shim" Dec 13 14:25:33.448338 env[1731]: time="2024-12-13T14:25:33.448284883Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3153 runtime=io.containerd.runc.v2\n" Dec 13 14:25:33.995927 env[1731]: time="2024-12-13T14:25:33.995863934Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:25:34.041135 env[1731]: time="2024-12-13T14:25:34.041061442Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\"" Dec 13 14:25:34.045293 env[1731]: time="2024-12-13T14:25:34.045250599Z" level=info msg="StartContainer for \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\"" Dec 13 14:25:34.113423 systemd[1]: Started cri-containerd-0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50.scope. Dec 13 14:25:34.162081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44-rootfs.mount: Deactivated successfully. Dec 13 14:25:34.197145 env[1731]: time="2024-12-13T14:25:34.197056049Z" level=info msg="StartContainer for \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\" returns successfully" Dec 13 14:25:34.227042 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:25:34.232380 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:25:34.232903 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:25:34.238193 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:25:34.274997 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:25:34.278073 systemd[1]: cri-containerd-0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50.scope: Deactivated successfully. Dec 13 14:25:34.338247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50-rootfs.mount: Deactivated successfully. 
Dec 13 14:25:34.364796 env[1731]: time="2024-12-13T14:25:34.364737030Z" level=info msg="shim disconnected" id=0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50 Dec 13 14:25:34.364796 env[1731]: time="2024-12-13T14:25:34.364794511Z" level=warning msg="cleaning up after shim disconnected" id=0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50 namespace=k8s.io Dec 13 14:25:34.365237 env[1731]: time="2024-12-13T14:25:34.364806670Z" level=info msg="cleaning up dead shim" Dec 13 14:25:34.374695 env[1731]: time="2024-12-13T14:25:34.374647136Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3217 runtime=io.containerd.runc.v2\n" Dec 13 14:25:35.019436 env[1731]: time="2024-12-13T14:25:35.019385207Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:25:35.073112 env[1731]: time="2024-12-13T14:25:35.073052040Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\"" Dec 13 14:25:35.079847 env[1731]: time="2024-12-13T14:25:35.079786143Z" level=info msg="StartContainer for \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\"" Dec 13 14:25:35.138507 systemd[1]: Started cri-containerd-9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945.scope. Dec 13 14:25:35.223426 env[1731]: time="2024-12-13T14:25:35.223373394Z" level=info msg="StartContainer for \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\" returns successfully" Dec 13 14:25:35.240556 systemd[1]: cri-containerd-9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945.scope: Deactivated successfully. Dec 13 14:25:35.284643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945-rootfs.mount: Deactivated successfully. Dec 13 14:25:35.299616 env[1731]: time="2024-12-13T14:25:35.299559934Z" level=info msg="shim disconnected" id=9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945 Dec 13 14:25:35.299616 env[1731]: time="2024-12-13T14:25:35.299613083Z" level=warning msg="cleaning up after shim disconnected" id=9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945 namespace=k8s.io Dec 13 14:25:35.300036 env[1731]: time="2024-12-13T14:25:35.299624649Z" level=info msg="cleaning up dead shim" Dec 13 14:25:35.310341 env[1731]: time="2024-12-13T14:25:35.310298127Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3277 runtime=io.containerd.runc.v2\n" Dec 13 14:25:36.018760 env[1731]: time="2024-12-13T14:25:36.018705814Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:25:36.047575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3214599063.mount: Deactivated successfully. 
Dec 13 14:25:36.057103 env[1731]: time="2024-12-13T14:25:36.057051988Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\"" Dec 13 14:25:36.058532 env[1731]: time="2024-12-13T14:25:36.058496945Z" level=info msg="StartContainer for \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\"" Dec 13 14:25:36.078602 systemd[1]: Started cri-containerd-5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b.scope. Dec 13 14:25:36.109871 systemd[1]: cri-containerd-5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b.scope: Deactivated successfully. Dec 13 14:25:36.112101 env[1731]: time="2024-12-13T14:25:36.111771442Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf531703e_ee31_46ae_b8c2_67ff72a8ab44.slice/cri-containerd-5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b.scope/memory.events\": no such file or directory" Dec 13 14:25:36.117739 env[1731]: time="2024-12-13T14:25:36.117682469Z" level=info msg="StartContainer for \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\" returns successfully" Dec 13 14:25:36.183703 env[1731]: time="2024-12-13T14:25:36.183651208Z" level=info msg="shim disconnected" id=5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b Dec 13 14:25:36.183703 env[1731]: time="2024-12-13T14:25:36.183703737Z" level=warning msg="cleaning up after shim disconnected" id=5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b namespace=k8s.io Dec 13 14:25:36.184054 env[1731]: time="2024-12-13T14:25:36.183716213Z" level=info msg="cleaning up dead shim" Dec 13 14:25:36.192927 env[1731]: time="2024-12-13T14:25:36.192879753Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3334 runtime=io.containerd.runc.v2\n" Dec 13 14:25:37.035344 env[1731]: time="2024-12-13T14:25:37.035277718Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:25:37.075999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761952038.mount: Deactivated successfully. Dec 13 14:25:37.101138 env[1731]: time="2024-12-13T14:25:37.101084344Z" level=info msg="CreateContainer within sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\"" Dec 13 14:25:37.105154 env[1731]: time="2024-12-13T14:25:37.102102689Z" level=info msg="StartContainer for \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\"" Dec 13 14:25:37.190004 systemd[1]: run-containerd-runc-k8s.io-486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae-runc.Db65Qb.mount: Deactivated successfully. Dec 13 14:25:37.224286 systemd[1]: Started cri-containerd-486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae.scope. Dec 13 14:25:37.280319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044641205.mount: Deactivated successfully. 
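Stepping back, the records above show cilium's init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each going through the same create, start, scope-deactivate, shim-cleanup cycle before the long-running cilium-agent container is created and started just below. That is ordinary init-container semantics: each one-shot step must exit successfully before the next begins. A simplified runner illustrating the ordering contract; the step names come from the log, the runner itself is illustrative:

    package main

    import "fmt"

    // runPod executes init steps strictly in order, stopping at the first
    // failure, and starts the main container only after all of them succeed.
    func runPod(initSteps []string, mainName string, run func(string) error) error {
        for _, step := range initSteps {
            if err := run(step); err != nil {
                return fmt.Errorf("init container %q failed: %w", step, err)
            }
        }
        return run(mainName)
    }

    func main() {
        start := func(name string) error {
            fmt.Println("StartContainer:", name) // stand-in for the CRI call
            return nil
        }
        if err := runPod(
            []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"},
            "cilium-agent",
            start,
        ); err != nil {
            fmt.Println(err)
        }
    }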
Dec 13 14:25:37.348254 env[1731]: time="2024-12-13T14:25:37.348116380Z" level=info msg="StartContainer for \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\" returns successfully"
Dec 13 14:25:37.614833 kubelet[2748]: I1213 14:25:37.613357 2748 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:25:37.761863 kubelet[2748]: I1213 14:25:37.761818 2748 topology_manager.go:215] "Topology Admit Handler" podUID="cf1b1412-1eb1-4404-8463-f11d4987f414" podNamespace="kube-system" podName="coredns-76f75df574-xmxqs"
Dec 13 14:25:37.772144 systemd[1]: Created slice kubepods-burstable-podcf1b1412_1eb1_4404_8463_f11d4987f414.slice.
Dec 13 14:25:37.812841 kubelet[2748]: I1213 14:25:37.812808 2748 topology_manager.go:215] "Topology Admit Handler" podUID="33fe4330-8078-46dd-bc52-2df442badfb9" podNamespace="kube-system" podName="coredns-76f75df574-ng84r"
Dec 13 14:25:37.821038 systemd[1]: Created slice kubepods-burstable-pod33fe4330_8078_46dd_bc52_2df442badfb9.slice.
Dec 13 14:25:37.931760 kubelet[2748]: I1213 14:25:37.931295 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw4mp\" (UniqueName: \"kubernetes.io/projected/33fe4330-8078-46dd-bc52-2df442badfb9-kube-api-access-qw4mp\") pod \"coredns-76f75df574-ng84r\" (UID: \"33fe4330-8078-46dd-bc52-2df442badfb9\") " pod="kube-system/coredns-76f75df574-ng84r"
Dec 13 14:25:37.931760 kubelet[2748]: I1213 14:25:37.931428 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf1b1412-1eb1-4404-8463-f11d4987f414-config-volume\") pod \"coredns-76f75df574-xmxqs\" (UID: \"cf1b1412-1eb1-4404-8463-f11d4987f414\") " pod="kube-system/coredns-76f75df574-xmxqs"
Dec 13 14:25:37.931760 kubelet[2748]: I1213 14:25:37.931496 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33fe4330-8078-46dd-bc52-2df442badfb9-config-volume\") pod \"coredns-76f75df574-ng84r\" (UID: \"33fe4330-8078-46dd-bc52-2df442badfb9\") " pod="kube-system/coredns-76f75df574-ng84r"
Dec 13 14:25:37.931760 kubelet[2748]: I1213 14:25:37.931590 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d85q\" (UniqueName: \"kubernetes.io/projected/cf1b1412-1eb1-4404-8463-f11d4987f414-kube-api-access-4d85q\") pod \"coredns-76f75df574-xmxqs\" (UID: \"cf1b1412-1eb1-4404-8463-f11d4987f414\") " pod="kube-system/coredns-76f75df574-xmxqs"
Dec 13 14:25:38.078606 kubelet[2748]: I1213 14:25:38.078575 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w9qhc" podStartSLOduration=5.82786952 podStartE2EDuration="18.078523664s" podCreationTimestamp="2024-12-13 14:25:20 +0000 UTC" firstStartedPulling="2024-12-13 14:25:20.888978747 +0000 UTC m=+11.436321302" lastFinishedPulling="2024-12-13 14:25:33.139632874 +0000 UTC m=+23.686975446" observedRunningTime="2024-12-13 14:25:38.078069869 +0000 UTC m=+28.625412467" watchObservedRunningTime="2024-12-13 14:25:38.078523664 +0000 UTC m=+28.625866266"
Dec 13 14:25:38.083749 env[1731]: time="2024-12-13T14:25:38.083701906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xmxqs,Uid:cf1b1412-1eb1-4404-8463-f11d4987f414,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:38.128238 env[1731]: time="2024-12-13T14:25:38.128191317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ng84r,Uid:33fe4330-8078-46dd-bc52-2df442badfb9,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:38.945372 env[1731]: time="2024-12-13T14:25:38.945317832Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:38.949881 env[1731]: time="2024-12-13T14:25:38.949832337Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:38.953060 env[1731]: time="2024-12-13T14:25:38.953012612Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:38.953732 env[1731]: time="2024-12-13T14:25:38.953686881Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:25:38.958688 env[1731]: time="2024-12-13T14:25:38.958645672Z" level=info msg="CreateContainer within sandbox \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:25:38.985596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1324692434.mount: Deactivated successfully.
Dec 13 14:25:39.000464 env[1731]: time="2024-12-13T14:25:39.000404374Z" level=info msg="CreateContainer within sandbox \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\""
Dec 13 14:25:39.002977 env[1731]: time="2024-12-13T14:25:39.002838429Z" level=info msg="StartContainer for \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\""
Dec 13 14:25:39.043197 systemd[1]: Started cri-containerd-81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497.scope.
Dec 13 14:25:39.091060 env[1731]: time="2024-12-13T14:25:39.090971388Z" level=info msg="StartContainer for \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\" returns successfully"
Dec 13 14:25:43.190985 systemd-networkd[1464]: cilium_host: Link UP
Dec 13 14:25:43.191133 systemd-networkd[1464]: cilium_net: Link UP
Dec 13 14:25:43.191137 systemd-networkd[1464]: cilium_net: Gained carrier
Dec 13 14:25:43.193316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:25:43.194142 systemd-networkd[1464]: cilium_host: Gained carrier
Dec 13 14:25:43.200498 (udev-worker)[3532]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:25:43.200498 (udev-worker)[3531]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:25:43.546142 systemd-networkd[1464]: cilium_net: Gained IPv6LL
Dec 13 14:25:43.561626 systemd-networkd[1464]: cilium_host: Gained IPv6LL
Dec 13 14:25:43.617352 (udev-worker)[3535]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:25:43.626304 systemd-networkd[1464]: cilium_vxlan: Link UP Dec 13 14:25:43.626313 systemd-networkd[1464]: cilium_vxlan: Gained carrier Dec 13 14:25:44.339488 kernel: NET: Registered PF_ALG protocol family Dec 13 14:25:45.646110 systemd-networkd[1464]: cilium_vxlan: Gained IPv6LL Dec 13 14:25:45.647897 systemd-networkd[1464]: lxc_health: Link UP Dec 13 14:25:45.793499 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:25:45.793384 systemd-networkd[1464]: lxc_health: Gained carrier Dec 13 14:25:46.278997 (udev-worker)[3857]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:25:46.283873 (udev-worker)[3530]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:25:46.290573 systemd-networkd[1464]: lxcd21c23d81b1a: Link UP Dec 13 14:25:46.291913 systemd-networkd[1464]: lxc7955d76d3e35: Link UP Dec 13 14:25:46.307513 kernel: eth0: renamed from tmp17652 Dec 13 14:25:46.307713 kernel: eth0: renamed from tmpc96bb Dec 13 14:25:46.322932 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd21c23d81b1a: link becomes ready Dec 13 14:25:46.323073 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7955d76d3e35: link becomes ready Dec 13 14:25:46.318745 systemd-networkd[1464]: lxcd21c23d81b1a: Gained carrier Dec 13 14:25:46.325719 systemd-networkd[1464]: lxc7955d76d3e35: Gained carrier Dec 13 14:25:46.770429 kubelet[2748]: I1213 14:25:46.770382 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-dzmpk" podStartSLOduration=8.921957466 podStartE2EDuration="26.770309966s" podCreationTimestamp="2024-12-13 14:25:20 +0000 UTC" firstStartedPulling="2024-12-13 14:25:21.105679643 +0000 UTC m=+11.653022202" lastFinishedPulling="2024-12-13 14:25:38.954032132 +0000 UTC m=+29.501374702" observedRunningTime="2024-12-13 14:25:40.158523057 +0000 UTC m=+30.705865634" watchObservedRunningTime="2024-12-13 14:25:46.770309966 +0000 UTC m=+37.317652555" Dec 13 14:25:47.112824 systemd-networkd[1464]: lxc_health: Gained IPv6LL Dec 13 14:25:47.880937 systemd-networkd[1464]: lxc7955d76d3e35: Gained IPv6LL Dec 13 14:25:48.008636 systemd-networkd[1464]: lxcd21c23d81b1a: Gained IPv6LL Dec 13 14:25:49.191558 kubelet[2748]: I1213 14:25:49.191525 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:25:52.465548 env[1731]: time="2024-12-13T14:25:52.465377015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:52.465548 env[1731]: time="2024-12-13T14:25:52.465444085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:52.465548 env[1731]: time="2024-12-13T14:25:52.465472498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:52.466438 env[1731]: time="2024-12-13T14:25:52.466269819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c96bba8a139f65896d555cdae5ac4fde9923f10f009e5e93d4464ecec325ddcc pid=3906 runtime=io.containerd.runc.v2 Dec 13 14:25:52.504219 systemd[1]: Started cri-containerd-c96bba8a139f65896d555cdae5ac4fde9923f10f009e5e93d4464ecec325ddcc.scope. Dec 13 14:25:52.518179 systemd[1]: run-containerd-runc-k8s.io-c96bba8a139f65896d555cdae5ac4fde9923f10f009e5e93d4464ecec325ddcc-runc.tOhqxM.mount: Deactivated successfully. 
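The pod_startup_latency_tracker record for cilium-operator-5cc964979-dzmpk above makes the bookkeeping explicit: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Rechecking with the wall-clock timestamps copied from that record; the result differs from the logged 8.921957466 only in the last few nanoseconds because the kubelet computes from the monotonic m=+ offsets:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matches the "+0000 UTC" timestamps as printed in the log.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-12-13 14:25:20 +0000 UTC")
        watched := mustParse("2024-12-13 14:25:46.770309966 +0000 UTC")
        pullStart := mustParse("2024-12-13 14:25:21.105679643 +0000 UTC")
        pullEnd := mustParse("2024-12-13 14:25:38.954032132 +0000 UTC")

        e2e := watched.Sub(created)         // podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // subtract the image-pull window

        fmt.Println(e2e) // 26.770309966s, matching the record
        fmt.Println(slo) // ~8.921957s, matching podStartSLOduration
    }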
Dec 13 14:25:52.533882 env[1731]: time="2024-12-13T14:25:52.533806645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:52.534131 env[1731]: time="2024-12-13T14:25:52.534099557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:52.534264 env[1731]: time="2024-12-13T14:25:52.534239341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:52.534591 env[1731]: time="2024-12-13T14:25:52.534546887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/176526c9a7953c761ce692396a8080b05af48ab8719bf6fc0ac62e4cc5e8d9b0 pid=3934 runtime=io.containerd.runc.v2 Dec 13 14:25:52.560463 systemd[1]: Started cri-containerd-176526c9a7953c761ce692396a8080b05af48ab8719bf6fc0ac62e4cc5e8d9b0.scope. Dec 13 14:25:52.633080 env[1731]: time="2024-12-13T14:25:52.633031307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xmxqs,Uid:cf1b1412-1eb1-4404-8463-f11d4987f414,Namespace:kube-system,Attempt:0,} returns sandbox id \"c96bba8a139f65896d555cdae5ac4fde9923f10f009e5e93d4464ecec325ddcc\"" Dec 13 14:25:52.639185 env[1731]: time="2024-12-13T14:25:52.639142411Z" level=info msg="CreateContainer within sandbox \"c96bba8a139f65896d555cdae5ac4fde9923f10f009e5e93d4464ecec325ddcc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:25:52.682167 env[1731]: time="2024-12-13T14:25:52.682062714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ng84r,Uid:33fe4330-8078-46dd-bc52-2df442badfb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"176526c9a7953c761ce692396a8080b05af48ab8719bf6fc0ac62e4cc5e8d9b0\"" Dec 13 14:25:52.689508 env[1731]: time="2024-12-13T14:25:52.689447185Z" level=info msg="CreateContainer within sandbox \"176526c9a7953c761ce692396a8080b05af48ab8719bf6fc0ac62e4cc5e8d9b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:25:52.705243 env[1731]: time="2024-12-13T14:25:52.705179009Z" level=info msg="CreateContainer within sandbox \"c96bba8a139f65896d555cdae5ac4fde9923f10f009e5e93d4464ecec325ddcc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29923f998b50d99a91975fb48cbb67799fbe8c0691228190240d05529137886c\"" Dec 13 14:25:52.706331 env[1731]: time="2024-12-13T14:25:52.706289062Z" level=info msg="StartContainer for \"29923f998b50d99a91975fb48cbb67799fbe8c0691228190240d05529137886c\"" Dec 13 14:25:52.725155 env[1731]: time="2024-12-13T14:25:52.724938387Z" level=info msg="CreateContainer within sandbox \"176526c9a7953c761ce692396a8080b05af48ab8719bf6fc0ac62e4cc5e8d9b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d34f07fa944ba160027c6a18ee3782486c78d412d8f62093a9cda88f58f55306\"" Dec 13 14:25:52.726916 env[1731]: time="2024-12-13T14:25:52.726865799Z" level=info msg="StartContainer for \"d34f07fa944ba160027c6a18ee3782486c78d412d8f62093a9cda88f58f55306\"" Dec 13 14:25:52.752297 systemd[1]: Started cri-containerd-29923f998b50d99a91975fb48cbb67799fbe8c0691228190240d05529137886c.scope. Dec 13 14:25:52.783915 systemd[1]: Started cri-containerd-d34f07fa944ba160027c6a18ee3782486c78d412d8f62093a9cda88f58f55306.scope. 
Dec 13 14:25:52.833291 env[1731]: time="2024-12-13T14:25:52.832565675Z" level=info msg="StartContainer for \"29923f998b50d99a91975fb48cbb67799fbe8c0691228190240d05529137886c\" returns successfully" Dec 13 14:25:52.844215 env[1731]: time="2024-12-13T14:25:52.844160667Z" level=info msg="StartContainer for \"d34f07fa944ba160027c6a18ee3782486c78d412d8f62093a9cda88f58f55306\" returns successfully" Dec 13 14:25:53.123366 kubelet[2748]: I1213 14:25:53.123327 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ng84r" podStartSLOduration=33.123261166 podStartE2EDuration="33.123261166s" podCreationTimestamp="2024-12-13 14:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:53.122296151 +0000 UTC m=+43.669638721" watchObservedRunningTime="2024-12-13 14:25:53.123261166 +0000 UTC m=+43.670603742" Dec 13 14:25:53.138528 kubelet[2748]: I1213 14:25:53.138430 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xmxqs" podStartSLOduration=33.138354938 podStartE2EDuration="33.138354938s" podCreationTimestamp="2024-12-13 14:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:53.137216159 +0000 UTC m=+43.684558736" watchObservedRunningTime="2024-12-13 14:25:53.138354938 +0000 UTC m=+43.685697515" Dec 13 14:26:01.745250 systemd[1]: Started sshd@5-172.31.21.15:22-139.178.89.65:39022.service. Dec 13 14:26:02.295845 sshd[4069]: Accepted publickey for core from 139.178.89.65 port 39022 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:26:02.317002 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:02.374583 systemd-logind[1722]: New session 6 of user core. Dec 13 14:26:02.377949 systemd[1]: Started session-6.scope. Dec 13 14:26:03.524111 sshd[4069]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:03.531009 systemd-logind[1722]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:26:03.531352 systemd[1]: sshd@5-172.31.21.15:22-139.178.89.65:39022.service: Deactivated successfully. Dec 13 14:26:03.533940 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:26:03.540004 systemd-logind[1722]: Removed session 6. Dec 13 14:26:08.551958 systemd[1]: Started sshd@6-172.31.21.15:22-139.178.89.65:33880.service. Dec 13 14:26:08.724044 sshd[4082]: Accepted publickey for core from 139.178.89.65 port 33880 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:26:08.725706 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:08.747180 systemd-logind[1722]: New session 7 of user core. Dec 13 14:26:08.749366 systemd[1]: Started session-7.scope. Dec 13 14:26:09.046802 sshd[4082]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:09.050466 systemd[1]: sshd@6-172.31.21.15:22-139.178.89.65:33880.service: Deactivated successfully. Dec 13 14:26:09.051649 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:26:09.052562 systemd-logind[1722]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:26:09.053646 systemd-logind[1722]: Removed session 7. Dec 13 14:26:14.075365 systemd[1]: Started sshd@7-172.31.21.15:22-139.178.89.65:33884.service. 
Dec 13 14:26:14.241215 sshd[4098]: Accepted publickey for core from 139.178.89.65 port 33884 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:26:14.243172 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:14.248864 systemd[1]: Started session-8.scope. Dec 13 14:26:14.249765 systemd-logind[1722]: New session 8 of user core. Dec 13 14:26:14.476322 sshd[4098]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:14.485315 systemd[1]: sshd@7-172.31.21.15:22-139.178.89.65:33884.service: Deactivated successfully. Dec 13 14:26:14.486363 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:26:14.487121 systemd-logind[1722]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:26:14.488096 systemd-logind[1722]: Removed session 8. Dec 13 14:26:19.505635 systemd[1]: Started sshd@8-172.31.21.15:22-139.178.89.65:34304.service. Dec 13 14:26:19.676276 sshd[4111]: Accepted publickey for core from 139.178.89.65 port 34304 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:26:19.678006 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:19.685721 systemd-logind[1722]: New session 9 of user core. Dec 13 14:26:19.686728 systemd[1]: Started session-9.scope. Dec 13 14:26:19.906300 sshd[4111]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:19.918028 systemd-logind[1722]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:26:19.918472 systemd[1]: sshd@8-172.31.21.15:22-139.178.89.65:34304.service: Deactivated successfully. Dec 13 14:26:19.919598 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:26:19.921390 systemd-logind[1722]: Removed session 9. Dec 13 14:26:24.932206 systemd[1]: Started sshd@9-172.31.21.15:22-139.178.89.65:34308.service. Dec 13 14:26:25.103946 sshd[4127]: Accepted publickey for core from 139.178.89.65 port 34308 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:26:25.105777 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:25.112650 systemd-logind[1722]: New session 10 of user core. Dec 13 14:26:25.113017 systemd[1]: Started session-10.scope. Dec 13 14:26:25.347989 sshd[4127]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:25.354298 systemd[1]: sshd@9-172.31.21.15:22-139.178.89.65:34308.service: Deactivated successfully. Dec 13 14:26:25.357014 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:26:25.358097 systemd-logind[1722]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:26:25.359886 systemd-logind[1722]: Removed session 10. Dec 13 14:26:30.374132 systemd[1]: Started sshd@10-172.31.21.15:22-139.178.89.65:50206.service. Dec 13 14:26:30.543100 sshd[4139]: Accepted publickey for core from 139.178.89.65 port 50206 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:26:30.544838 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:30.551166 systemd[1]: Started session-11.scope. Dec 13 14:26:30.552077 systemd-logind[1722]: New session 11 of user core. Dec 13 14:26:30.822909 sshd[4139]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:30.827423 systemd-logind[1722]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:26:30.827658 systemd[1]: sshd@10-172.31.21.15:22-139.178.89.65:50206.service: Deactivated successfully. 
Dec 13 14:26:30.828977 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:26:30.830415 systemd-logind[1722]: Removed session 11.
Dec 13 14:26:35.852541 systemd[1]: Started sshd@11-172.31.21.15:22-139.178.89.65:50210.service.
Dec 13 14:26:36.029490 sshd[4152]: Accepted publickey for core from 139.178.89.65 port 50210 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:36.031443 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:36.037575 systemd-logind[1722]: New session 12 of user core.
Dec 13 14:26:36.037877 systemd[1]: Started session-12.scope.
Dec 13 14:26:36.249755 sshd[4152]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:36.254708 systemd[1]: sshd@11-172.31.21.15:22-139.178.89.65:50210.service: Deactivated successfully.
Dec 13 14:26:36.257013 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:26:36.258164 systemd-logind[1722]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:26:36.259374 systemd-logind[1722]: Removed session 12.
Dec 13 14:26:36.275622 systemd[1]: Started sshd@12-172.31.21.15:22-139.178.89.65:50226.service.
Dec 13 14:26:36.454850 sshd[4164]: Accepted publickey for core from 139.178.89.65 port 50226 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:36.456788 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:36.468892 systemd-logind[1722]: New session 13 of user core.
Dec 13 14:26:36.469324 systemd[1]: Started session-13.scope.
Dec 13 14:26:36.768548 sshd[4164]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:36.773359 systemd[1]: sshd@12-172.31.21.15:22-139.178.89.65:50226.service: Deactivated successfully.
Dec 13 14:26:36.774213 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:26:36.774960 systemd-logind[1722]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:26:36.776418 systemd-logind[1722]: Removed session 13.
Dec 13 14:26:36.793995 systemd[1]: Started sshd@13-172.31.21.15:22-139.178.89.65:50240.service.
Dec 13 14:26:36.976302 sshd[4174]: Accepted publickey for core from 139.178.89.65 port 50240 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:36.978088 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:36.983605 systemd-logind[1722]: New session 14 of user core.
Dec 13 14:26:36.984886 systemd[1]: Started session-14.scope.
Dec 13 14:26:37.211834 sshd[4174]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:37.215899 systemd[1]: sshd@13-172.31.21.15:22-139.178.89.65:50240.service: Deactivated successfully.
Dec 13 14:26:37.217075 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:26:37.220243 systemd-logind[1722]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:26:37.222132 systemd-logind[1722]: Removed session 14.
Dec 13 14:26:42.245560 systemd[1]: Started sshd@14-172.31.21.15:22-139.178.89.65:50214.service.
Dec 13 14:26:42.444762 sshd[4188]: Accepted publickey for core from 139.178.89.65 port 50214 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:42.447001 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:42.458142 systemd[1]: Started session-15.scope.
Dec 13 14:26:42.458690 systemd-logind[1722]: New session 15 of user core.
Dec 13 14:26:42.734418 sshd[4188]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:42.738926 systemd[1]: sshd@14-172.31.21.15:22-139.178.89.65:50214.service: Deactivated successfully.
Dec 13 14:26:42.740883 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:26:42.742586 systemd-logind[1722]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:26:42.746304 systemd-logind[1722]: Removed session 15.
Dec 13 14:26:47.763840 systemd[1]: Started sshd@15-172.31.21.15:22-139.178.89.65:50230.service.
Dec 13 14:26:47.948316 sshd[4200]: Accepted publickey for core from 139.178.89.65 port 50230 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:47.951572 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:47.959420 systemd[1]: Started session-16.scope.
Dec 13 14:26:47.960402 systemd-logind[1722]: New session 16 of user core.
Dec 13 14:26:48.190038 sshd[4200]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:48.194220 systemd-logind[1722]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:26:48.194808 systemd[1]: sshd@15-172.31.21.15:22-139.178.89.65:50230.service: Deactivated successfully.
Dec 13 14:26:48.196245 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:26:48.198060 systemd-logind[1722]: Removed session 16.
Dec 13 14:26:53.223180 systemd[1]: Started sshd@16-172.31.21.15:22-139.178.89.65:60688.service.
Dec 13 14:26:53.402304 sshd[4213]: Accepted publickey for core from 139.178.89.65 port 60688 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:53.404015 sshd[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:53.410905 systemd[1]: Started session-17.scope.
Dec 13 14:26:53.411569 systemd-logind[1722]: New session 17 of user core.
Dec 13 14:26:53.656400 sshd[4213]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:53.667280 systemd[1]: sshd@16-172.31.21.15:22-139.178.89.65:60688.service: Deactivated successfully.
Dec 13 14:26:53.668535 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:26:53.669498 systemd-logind[1722]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:26:53.670487 systemd-logind[1722]: Removed session 17.
Dec 13 14:26:53.683696 systemd[1]: Started sshd@17-172.31.21.15:22-139.178.89.65:60698.service.
Dec 13 14:26:53.848884 sshd[4224]: Accepted publickey for core from 139.178.89.65 port 60698 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:53.850435 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:53.856418 systemd[1]: Started session-18.scope.
Dec 13 14:26:53.857379 systemd-logind[1722]: New session 18 of user core.
Dec 13 14:26:54.597164 sshd[4224]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:54.601046 systemd[1]: sshd@17-172.31.21.15:22-139.178.89.65:60698.service: Deactivated successfully.
Dec 13 14:26:54.602893 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:26:54.604234 systemd-logind[1722]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:26:54.605936 systemd-logind[1722]: Removed session 18.
Dec 13 14:26:54.637291 systemd[1]: Started sshd@18-172.31.21.15:22-139.178.89.65:60702.service.
Dec 13 14:26:54.821507 sshd[4236]: Accepted publickey for core from 139.178.89.65 port 60702 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:54.822975 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:54.829642 systemd-logind[1722]: New session 19 of user core.
Dec 13 14:26:54.829682 systemd[1]: Started session-19.scope.
Dec 13 14:26:57.260961 sshd[4236]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:57.275694 systemd[1]: sshd@18-172.31.21.15:22-139.178.89.65:60702.service: Deactivated successfully.
Dec 13 14:26:57.277547 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:26:57.277549 systemd-logind[1722]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:26:57.279271 systemd-logind[1722]: Removed session 19.
Dec 13 14:26:57.289268 systemd[1]: Started sshd@19-172.31.21.15:22-139.178.89.65:60706.service.
Dec 13 14:26:57.488145 sshd[4253]: Accepted publickey for core from 139.178.89.65 port 60706 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:57.490097 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:57.498637 systemd-logind[1722]: New session 20 of user core.
Dec 13 14:26:57.499589 systemd[1]: Started session-20.scope.
Dec 13 14:26:58.075596 sshd[4253]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:58.080715 systemd[1]: sshd@19-172.31.21.15:22-139.178.89.65:60706.service: Deactivated successfully.
Dec 13 14:26:58.081810 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:26:58.082575 systemd-logind[1722]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:26:58.084373 systemd-logind[1722]: Removed session 20.
Dec 13 14:26:58.102846 systemd[1]: Started sshd@20-172.31.21.15:22-139.178.89.65:51950.service.
Dec 13 14:26:58.271174 sshd[4262]: Accepted publickey for core from 139.178.89.65 port 51950 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:26:58.274782 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:26:58.281543 systemd-logind[1722]: New session 21 of user core.
Dec 13 14:26:58.282026 systemd[1]: Started session-21.scope.
Dec 13 14:26:58.505623 sshd[4262]: pam_unix(sshd:session): session closed for user core
Dec 13 14:26:58.522550 systemd[1]: sshd@20-172.31.21.15:22-139.178.89.65:51950.service: Deactivated successfully.
Dec 13 14:26:58.524401 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:26:58.525748 systemd-logind[1722]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:26:58.529092 systemd-logind[1722]: Removed session 21.
Dec 13 14:27:03.536754 systemd[1]: Started sshd@21-172.31.21.15:22-139.178.89.65:51960.service.
Dec 13 14:27:03.728195 sshd[4274]: Accepted publickey for core from 139.178.89.65 port 51960 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:27:03.738438 sshd[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:03.752131 systemd[1]: Started session-22.scope.
Dec 13 14:27:03.754350 systemd-logind[1722]: New session 22 of user core.
Dec 13 14:27:04.068823 sshd[4274]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:04.082615 systemd[1]: sshd@21-172.31.21.15:22-139.178.89.65:51960.service: Deactivated successfully.
Dec 13 14:27:04.085770 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:27:04.087059 systemd-logind[1722]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:27:04.088922 systemd-logind[1722]: Removed session 22.
Dec 13 14:27:09.097298 systemd[1]: Started sshd@22-172.31.21.15:22-139.178.89.65:53680.service.
Dec 13 14:27:09.268291 sshd[4289]: Accepted publickey for core from 139.178.89.65 port 53680 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:27:09.271344 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:09.278958 systemd[1]: Started session-23.scope.
Dec 13 14:27:09.279681 systemd-logind[1722]: New session 23 of user core.
Dec 13 14:27:09.473402 sshd[4289]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:09.478010 systemd[1]: sshd@22-172.31.21.15:22-139.178.89.65:53680.service: Deactivated successfully.
Dec 13 14:27:09.478558 systemd-logind[1722]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:27:09.479181 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:27:09.480621 systemd-logind[1722]: Removed session 23.
Dec 13 14:27:14.508796 systemd[1]: Started sshd@23-172.31.21.15:22-139.178.89.65:53682.service.
Dec 13 14:27:14.682874 sshd[4302]: Accepted publickey for core from 139.178.89.65 port 53682 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:27:14.684990 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:14.711503 systemd[1]: Started session-24.scope.
Dec 13 14:27:14.712595 systemd-logind[1722]: New session 24 of user core.
Dec 13 14:27:14.958628 sshd[4302]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:14.969095 systemd[1]: sshd@23-172.31.21.15:22-139.178.89.65:53682.service: Deactivated successfully.
Dec 13 14:27:14.971004 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:27:14.972931 systemd-logind[1722]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:27:14.974357 systemd-logind[1722]: Removed session 24.
Dec 13 14:27:20.003273 systemd[1]: Started sshd@24-172.31.21.15:22-139.178.89.65:36998.service.
Dec 13 14:27:20.169329 sshd[4314]: Accepted publickey for core from 139.178.89.65 port 36998 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:27:20.171366 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:20.177409 systemd[1]: Started session-25.scope.
Dec 13 14:27:20.178399 systemd-logind[1722]: New session 25 of user core.
Dec 13 14:27:20.410509 sshd[4314]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:20.426111 systemd[1]: sshd@24-172.31.21.15:22-139.178.89.65:36998.service: Deactivated successfully.
Dec 13 14:27:20.430043 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 14:27:20.438096 systemd-logind[1722]: Session 25 logged out. Waiting for processes to exit.
Dec 13 14:27:20.447418 systemd[1]: Started sshd@25-172.31.21.15:22-139.178.89.65:37012.service.
Dec 13 14:27:20.449499 systemd-logind[1722]: Removed session 25.
Dec 13 14:27:20.634822 sshd[4326]: Accepted publickey for core from 139.178.89.65 port 37012 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:27:20.637327 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:20.649801 systemd[1]: Started session-26.scope.
Dec 13 14:27:20.650714 systemd-logind[1722]: New session 26 of user core.
Dec 13 14:27:22.745942 env[1731]: time="2024-12-13T14:27:22.745855398Z" level=info msg="StopContainer for \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\" with timeout 30 (s)"
Dec 13 14:27:22.748026 systemd[1]: run-containerd-runc-k8s.io-486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae-runc.o2SKWl.mount: Deactivated successfully.
Dec 13 14:27:22.751169 env[1731]: time="2024-12-13T14:27:22.751120879Z" level=info msg="Stop container \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\" with signal terminated"
Dec 13 14:27:22.791933 systemd[1]: cri-containerd-81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497.scope: Deactivated successfully.
Dec 13 14:27:22.838784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497-rootfs.mount: Deactivated successfully.
Dec 13 14:27:22.859672 env[1731]: time="2024-12-13T14:27:22.859616479Z" level=info msg="shim disconnected" id=81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497
Dec 13 14:27:22.859672 env[1731]: time="2024-12-13T14:27:22.859672348Z" level=warning msg="cleaning up after shim disconnected" id=81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497 namespace=k8s.io
Dec 13 14:27:22.860286 env[1731]: time="2024-12-13T14:27:22.859684807Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:22.871820 env[1731]: time="2024-12-13T14:27:22.871768654Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4372 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:22.875561 env[1731]: time="2024-12-13T14:27:22.875449152Z" level=info msg="StopContainer for \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\" returns successfully"
Dec 13 14:27:22.876901 env[1731]: time="2024-12-13T14:27:22.876862144Z" level=info msg="StopPodSandbox for \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\""
Dec 13 14:27:22.877339 env[1731]: time="2024-12-13T14:27:22.877261438Z" level=info msg="Container to stop \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:22.881241 env[1731]: time="2024-12-13T14:27:22.881180916Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:27:22.885137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45-shm.mount: Deactivated successfully.
Dec 13 14:27:22.896973 env[1731]: time="2024-12-13T14:27:22.896890587Z" level=info msg="StopContainer for \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\" with timeout 2 (s)"
Dec 13 14:27:22.897450 env[1731]: time="2024-12-13T14:27:22.897413181Z" level=info msg="Stop container \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\" with signal terminated"
Dec 13 14:27:22.904355 systemd[1]: cri-containerd-1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45.scope: Deactivated successfully.
Dec 13 14:27:22.923626 systemd-networkd[1464]: lxc_health: Link DOWN
Dec 13 14:27:22.923635 systemd-networkd[1464]: lxc_health: Lost carrier
Dec 13 14:27:23.070721 env[1731]: time="2024-12-13T14:27:23.070667751Z" level=info msg="shim disconnected" id=1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45
Dec 13 14:27:23.071519 env[1731]: time="2024-12-13T14:27:23.071493872Z" level=warning msg="cleaning up after shim disconnected" id=1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45 namespace=k8s.io
Dec 13 14:27:23.071697 env[1731]: time="2024-12-13T14:27:23.071682771Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:23.074195 systemd[1]: cri-containerd-486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae.scope: Deactivated successfully.
Dec 13 14:27:23.074532 systemd[1]: cri-containerd-486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae.scope: Consumed 8.657s CPU time.
Dec 13 14:27:23.091956 env[1731]: time="2024-12-13T14:27:23.091892897Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4414 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:23.092325 env[1731]: time="2024-12-13T14:27:23.092289093Z" level=info msg="TearDown network for sandbox \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\" successfully"
Dec 13 14:27:23.092428 env[1731]: time="2024-12-13T14:27:23.092325312Z" level=info msg="StopPodSandbox for \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\" returns successfully"
Dec 13 14:27:23.135771 env[1731]: time="2024-12-13T14:27:23.135713251Z" level=info msg="shim disconnected" id=486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae
Dec 13 14:27:23.137342 env[1731]: time="2024-12-13T14:27:23.135778421Z" level=warning msg="cleaning up after shim disconnected" id=486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae namespace=k8s.io
Dec 13 14:27:23.137342 env[1731]: time="2024-12-13T14:27:23.135794728Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:23.146732 env[1731]: time="2024-12-13T14:27:23.146626457Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4441 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:23.150695 env[1731]: time="2024-12-13T14:27:23.150608142Z" level=info msg="StopContainer for \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\" returns successfully"
Dec 13 14:27:23.151824 env[1731]: time="2024-12-13T14:27:23.151769124Z" level=info msg="StopPodSandbox for \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\""
Dec 13 14:27:23.152050 env[1731]: time="2024-12-13T14:27:23.152020822Z" level=info msg="Container to stop \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:23.152125 env[1731]: time="2024-12-13T14:27:23.152050733Z" level=info msg="Container to stop \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:23.152125 env[1731]: time="2024-12-13T14:27:23.152068987Z" level=info msg="Container to stop \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:23.152358 env[1731]: time="2024-12-13T14:27:23.152271791Z" level=info msg="Container to stop \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:23.152638 env[1731]: time="2024-12-13T14:27:23.152605607Z" level=info msg="Container to stop \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:27:23.163770 systemd[1]: cri-containerd-2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba.scope: Deactivated successfully.
Dec 13 14:27:23.196187 env[1731]: time="2024-12-13T14:27:23.196116961Z" level=info msg="shim disconnected" id=2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba
Dec 13 14:27:23.196187 env[1731]: time="2024-12-13T14:27:23.196177749Z" level=warning msg="cleaning up after shim disconnected" id=2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba namespace=k8s.io
Dec 13 14:27:23.196187 env[1731]: time="2024-12-13T14:27:23.196191181Z" level=info msg="cleaning up dead shim"
Dec 13 14:27:23.206736 env[1731]: time="2024-12-13T14:27:23.206678681Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4473 runtime=io.containerd.runc.v2\n"
Dec 13 14:27:23.207604 env[1731]: time="2024-12-13T14:27:23.207564400Z" level=info msg="TearDown network for sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" successfully"
Dec 13 14:27:23.207604 env[1731]: time="2024-12-13T14:27:23.207597386Z" level=info msg="StopPodSandbox for \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" returns successfully"
Dec 13 14:27:23.247208 kubelet[2748]: I1213 14:27:23.247163 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a-cilium-config-path\") pod \"6787b7f6-4e80-45ef-ad8c-902e1c3fed5a\" (UID: \"6787b7f6-4e80-45ef-ad8c-902e1c3fed5a\") "
Dec 13 14:27:23.247821 kubelet[2748]: I1213 14:27:23.247220 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5rjz\" (UniqueName: \"kubernetes.io/projected/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a-kube-api-access-w5rjz\") pod \"6787b7f6-4e80-45ef-ad8c-902e1c3fed5a\" (UID: \"6787b7f6-4e80-45ef-ad8c-902e1c3fed5a\") "
Dec 13 14:27:23.247821 kubelet[2748]: I1213 14:27:23.247250 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-bpf-maps\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.247821 kubelet[2748]: I1213 14:27:23.247275 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-xtables-lock\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.247821 kubelet[2748]: I1213 14:27:23.247306 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f531703e-ee31-46ae-b8c2-67ff72a8ab44-clustermesh-secrets\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.247821 kubelet[2748]: I1213 14:27:23.247335 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-hostproc\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.247821 kubelet[2748]: I1213 14:27:23.247434 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-host-proc-sys-net\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.248031 kubelet[2748]: I1213 14:27:23.247479 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-cgroup\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.248031 kubelet[2748]: I1213 14:27:23.247506 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-etc-cni-netd\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.249537 kubelet[2748]: I1213 14:27:23.248160 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.251248 kubelet[2748]: I1213 14:27:23.248000 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.256868 kubelet[2748]: I1213 14:27:23.256822 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6787b7f6-4e80-45ef-ad8c-902e1c3fed5a" (UID: "6787b7f6-4e80-45ef-ad8c-902e1c3fed5a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:27:23.266588 kubelet[2748]: I1213 14:27:23.266487 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-hostproc" (OuterVolumeSpecName: "hostproc") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.266954 kubelet[2748]: I1213 14:27:23.266506 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.267056 kubelet[2748]: I1213 14:27:23.266529 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.267133 kubelet[2748]: I1213 14:27:23.266559 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.267904 kubelet[2748]: I1213 14:27:23.267880 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f531703e-ee31-46ae-b8c2-67ff72a8ab44-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:27:23.268915 kubelet[2748]: I1213 14:27:23.268879 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a-kube-api-access-w5rjz" (OuterVolumeSpecName: "kube-api-access-w5rjz") pod "6787b7f6-4e80-45ef-ad8c-902e1c3fed5a" (UID: "6787b7f6-4e80-45ef-ad8c-902e1c3fed5a"). InnerVolumeSpecName "kube-api-access-w5rjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:27:23.348670 kubelet[2748]: I1213 14:27:23.348506 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f531703e-ee31-46ae-b8c2-67ff72a8ab44-hubble-tls\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.348670 kubelet[2748]: I1213 14:27:23.348606 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-config-path\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.348670 kubelet[2748]: I1213 14:27:23.348639 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-lib-modules\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.350634 kubelet[2748]: I1213 14:27:23.350588 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-host-proc-sys-kernel\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.350762 kubelet[2748]: I1213 14:27:23.350653 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-run\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.350762 kubelet[2748]: I1213 14:27:23.350677 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cni-path\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.350762 kubelet[2748]: I1213 14:27:23.350707 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6thw\" (UniqueName: \"kubernetes.io/projected/f531703e-ee31-46ae-b8c2-67ff72a8ab44-kube-api-access-d6thw\") pod \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\" (UID: \"f531703e-ee31-46ae-b8c2-67ff72a8ab44\") "
Dec 13 14:27:23.350916 kubelet[2748]: I1213 14:27:23.350763 2748 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-bpf-maps\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.350916 kubelet[2748]: I1213 14:27:23.350784 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a-cilium-config-path\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.350916 kubelet[2748]: I1213 14:27:23.350799 2748 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w5rjz\" (UniqueName: \"kubernetes.io/projected/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a-kube-api-access-w5rjz\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.350916 kubelet[2748]: I1213 14:27:23.350817 2748 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-xtables-lock\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.350916 kubelet[2748]: I1213 14:27:23.350835 2748 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f531703e-ee31-46ae-b8c2-67ff72a8ab44-clustermesh-secrets\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.350916 kubelet[2748]: I1213 14:27:23.350852 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-cgroup\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.350916 kubelet[2748]: I1213 14:27:23.350868 2748 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-hostproc\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.350916 kubelet[2748]: I1213 14:27:23.350883 2748 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-host-proc-sys-net\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.351307 kubelet[2748]: I1213 14:27:23.350898 2748 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-etc-cni-netd\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.351718 kubelet[2748]: I1213 14:27:23.351691 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.351901 kubelet[2748]: I1213 14:27:23.351880 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.352223 kubelet[2748]: I1213 14:27:23.352200 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.352369 kubelet[2748]: I1213 14:27:23.352351 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cni-path" (OuterVolumeSpecName: "cni-path") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:27:23.356192 kubelet[2748]: I1213 14:27:23.356154 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f531703e-ee31-46ae-b8c2-67ff72a8ab44-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:27:23.356417 kubelet[2748]: I1213 14:27:23.356392 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:27:23.358841 kubelet[2748]: I1213 14:27:23.358791 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f531703e-ee31-46ae-b8c2-67ff72a8ab44-kube-api-access-d6thw" (OuterVolumeSpecName: "kube-api-access-d6thw") pod "f531703e-ee31-46ae-b8c2-67ff72a8ab44" (UID: "f531703e-ee31-46ae-b8c2-67ff72a8ab44"). InnerVolumeSpecName "kube-api-access-d6thw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:27:23.436854 kubelet[2748]: I1213 14:27:23.436811 2748 scope.go:117] "RemoveContainer" containerID="486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae"
Dec 13 14:27:23.441314 systemd[1]: Removed slice kubepods-burstable-podf531703e_ee31_46ae_b8c2_67ff72a8ab44.slice.
Dec 13 14:27:23.441443 systemd[1]: kubepods-burstable-podf531703e_ee31_46ae_b8c2_67ff72a8ab44.slice: Consumed 8.793s CPU time.
Dec 13 14:27:23.444918 env[1731]: time="2024-12-13T14:27:23.444812370Z" level=info msg="RemoveContainer for \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\""
Dec 13 14:27:23.451179 kubelet[2748]: I1213 14:27:23.451150 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-run\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.451632 kubelet[2748]: I1213 14:27:23.451365 2748 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d6thw\" (UniqueName: \"kubernetes.io/projected/f531703e-ee31-46ae-b8c2-67ff72a8ab44-kube-api-access-d6thw\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.451951 kubelet[2748]: I1213 14:27:23.451931 2748 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cni-path\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.452084 kubelet[2748]: I1213 14:27:23.452072 2748 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f531703e-ee31-46ae-b8c2-67ff72a8ab44-hubble-tls\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.452163 env[1731]: time="2024-12-13T14:27:23.452108669Z" level=info msg="RemoveContainer for \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\" returns successfully"
Dec 13 14:27:23.452246 kubelet[2748]: I1213 14:27:23.452232 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f531703e-ee31-46ae-b8c2-67ff72a8ab44-cilium-config-path\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.452323 kubelet[2748]: I1213 14:27:23.452313 2748 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-lib-modules\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.452406 kubelet[2748]: I1213 14:27:23.452398 2748 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f531703e-ee31-46ae-b8c2-67ff72a8ab44-host-proc-sys-kernel\") on node \"ip-172-31-21-15\" DevicePath \"\""
Dec 13 14:27:23.456785 kubelet[2748]: I1213 14:27:23.456642 2748 scope.go:117] "RemoveContainer" containerID="5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b"
Dec 13 14:27:23.458091 systemd[1]: Removed slice kubepods-besteffort-pod6787b7f6_4e80_45ef_ad8c_902e1c3fed5a.slice.
Dec 13 14:27:23.461576 env[1731]: time="2024-12-13T14:27:23.461505711Z" level=info msg="RemoveContainer for \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\""
Dec 13 14:27:23.468893 env[1731]: time="2024-12-13T14:27:23.468844031Z" level=info msg="RemoveContainer for \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\" returns successfully"
Dec 13 14:27:23.469338 kubelet[2748]: I1213 14:27:23.469183 2748 scope.go:117] "RemoveContainer" containerID="9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945"
Dec 13 14:27:23.470508 env[1731]: time="2024-12-13T14:27:23.470441159Z" level=info msg="RemoveContainer for \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\""
Dec 13 14:27:23.477234 env[1731]: time="2024-12-13T14:27:23.477179240Z" level=info msg="RemoveContainer for \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\" returns successfully"
Dec 13 14:27:23.477518 kubelet[2748]: I1213 14:27:23.477493 2748 scope.go:117] "RemoveContainer" containerID="0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50"
Dec 13 14:27:23.480867 env[1731]: time="2024-12-13T14:27:23.478782149Z" level=info msg="RemoveContainer for \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\""
Dec 13 14:27:23.488584 env[1731]: time="2024-12-13T14:27:23.488506402Z" level=info msg="RemoveContainer for \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\" returns successfully"
Dec 13 14:27:23.490886 kubelet[2748]: I1213 14:27:23.490853 2748 scope.go:117] "RemoveContainer" containerID="cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44"
Dec 13 14:27:23.496077 env[1731]: time="2024-12-13T14:27:23.496033404Z" level=info msg="RemoveContainer for \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\""
Dec 13 14:27:23.501737 env[1731]: time="2024-12-13T14:27:23.501690479Z" level=info msg="RemoveContainer for \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\" returns successfully"
Dec 13 14:27:23.502070 kubelet[2748]: I1213 14:27:23.502034 2748 scope.go:117] "RemoveContainer" containerID="486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae"
Dec 13 14:27:23.502495 env[1731]: time="2024-12-13T14:27:23.502385234Z" level=error msg="ContainerStatus for \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\": not found"
Dec 13 14:27:23.506163 kubelet[2748]: E1213 14:27:23.506128 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\": not found" containerID="486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae"
Dec 13 14:27:23.508379 kubelet[2748]: I1213 14:27:23.508339 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae"} err="failed to get container status \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae\": not found"
Dec 13 14:27:23.508694 kubelet[2748]: I1213 14:27:23.508388 2748 scope.go:117] "RemoveContainer" containerID="5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b"
Dec 13 14:27:23.508955 env[1731]: time="2024-12-13T14:27:23.508885015Z" level=error msg="ContainerStatus for \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\": not found"
Dec 13 14:27:23.509132 kubelet[2748]: E1213 14:27:23.509109 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\": not found" containerID="5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b"
Dec 13 14:27:23.509206 kubelet[2748]: I1213 14:27:23.509161 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b"} err="failed to get container status \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5219758149a0d0b161b28b8dd1486caa923024832593875c6e3594f454297c2b\": not found"
Dec 13 14:27:23.509206 kubelet[2748]: I1213 14:27:23.509179 2748 scope.go:117] "RemoveContainer" containerID="9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945"
Dec 13 14:27:23.509497 env[1731]: time="2024-12-13T14:27:23.509419764Z" level=error msg="ContainerStatus for \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\": not found"
Dec 13 14:27:23.509635 kubelet[2748]: E1213 14:27:23.509615 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\": not found" containerID="9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945"
Dec 13 14:27:23.509701 kubelet[2748]: I1213 14:27:23.509652 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945"} err="failed to get container status \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f7f6ddc3ff75c00575d8aff1b8452f0c268b6f450ddc695c94613f8c9e9d945\": not found"
Dec 13 14:27:23.509701 kubelet[2748]: I1213 14:27:23.509666 2748 scope.go:117] "RemoveContainer" containerID="0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50"
Dec 13 14:27:23.509904 env[1731]: time="2024-12-13T14:27:23.509851871Z" level=error msg="ContainerStatus for \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\": not found"
Dec 13 14:27:23.510042 kubelet[2748]: E1213 14:27:23.510021 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\": not found" containerID="0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50"
Dec 13 14:27:23.510117 kubelet[2748]: I1213 14:27:23.510056 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50"} err="failed to get container status \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\": rpc error: code = NotFound desc = an error occurred when try to find container \"0218173e679da0eef5602ce3a651377ef410deecc0fb8a483240a16ecc56ce50\": not found"
Dec 13 14:27:23.510117 kubelet[2748]: I1213 14:27:23.510069 2748 scope.go:117] "RemoveContainer" containerID="cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44"
Dec 13 14:27:23.510287 env[1731]: time="2024-12-13T14:27:23.510237250Z" level=error msg="ContainerStatus for \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\": not found"
Dec 13 14:27:23.510410 kubelet[2748]: E1213 14:27:23.510390 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\": not found" containerID="cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44"
Dec 13 14:27:23.510485 kubelet[2748]: I1213 14:27:23.510424 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44"} err="failed to get container status \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb28d2c015cbe35825d5edf1e9d9ee434eb2ed4f8bcf964b3d101d495b730d44\": not found"
Dec 13 14:27:23.510485 kubelet[2748]: I1213 14:27:23.510440 2748 scope.go:117] "RemoveContainer" containerID="81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497"
Dec 13 14:27:23.511772 env[1731]: time="2024-12-13T14:27:23.511740550Z" level=info msg="RemoveContainer for \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\""
Dec 13 14:27:23.516884 env[1731]: time="2024-12-13T14:27:23.516839033Z" level=info msg="RemoveContainer for \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\" returns successfully"
Dec 13 14:27:23.517099 kubelet[2748]: I1213 14:27:23.517079 2748 scope.go:117] "RemoveContainer" containerID="81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497"
Dec 13 14:27:23.517762 kubelet[2748]: E1213 14:27:23.517627 2748 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\": not found" containerID="81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497"
Dec 13 14:27:23.517762 kubelet[2748]: I1213 14:27:23.517668 2748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497"} err="failed to get container status \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\": rpc error: code = NotFound desc = an error occurred when try to find container \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\": not found"
Dec 13 14:27:23.517930 env[1731]: time="2024-12-13T14:27:23.517435699Z" level=error msg="ContainerStatus for \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81afb4b8a84e8dd1d9777ac826337bc64162ea8dc165263ee5a1e5b848ba6497\": not found"
Dec 13 14:27:23.731924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-486eaf436ee5cc083867961fbd709ba4902f19fd1a1b8a1d2d0cd0ca0f5e13ae-rootfs.mount: Deactivated successfully.
Dec 13 14:27:23.732056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45-rootfs.mount: Deactivated successfully.
Dec 13 14:27:23.732141 systemd[1]: var-lib-kubelet-pods-6787b7f6\x2d4e80\x2d45ef\x2dad8c\x2d902e1c3fed5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5rjz.mount: Deactivated successfully.
Dec 13 14:27:23.732231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba-rootfs.mount: Deactivated successfully.
Dec 13 14:27:23.732313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba-shm.mount: Deactivated successfully.
Dec 13 14:27:23.732407 systemd[1]: var-lib-kubelet-pods-f531703e\x2dee31\x2d46ae\x2db8c2\x2d67ff72a8ab44-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6thw.mount: Deactivated successfully.
Dec 13 14:27:23.732504 systemd[1]: var-lib-kubelet-pods-f531703e\x2dee31\x2d46ae\x2db8c2\x2d67ff72a8ab44-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:27:23.732590 systemd[1]: var-lib-kubelet-pods-f531703e\x2dee31\x2d46ae\x2db8c2\x2d67ff72a8ab44-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:27:23.793037 kubelet[2748]: I1213 14:27:23.792994 2748 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6787b7f6-4e80-45ef-ad8c-902e1c3fed5a" path="/var/lib/kubelet/pods/6787b7f6-4e80-45ef-ad8c-902e1c3fed5a/volumes"
Dec 13 14:27:23.793902 kubelet[2748]: I1213 14:27:23.793867 2748 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f531703e-ee31-46ae-b8c2-67ff72a8ab44" path="/var/lib/kubelet/pods/f531703e-ee31-46ae-b8c2-67ff72a8ab44/volumes"
Dec 13 14:27:24.520847 sshd[4326]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:24.526770 systemd[1]: sshd@25-172.31.21.15:22-139.178.89.65:37012.service: Deactivated successfully.
Dec 13 14:27:24.528638 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:27:24.529703 systemd-logind[1722]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:27:24.531345 systemd-logind[1722]: Removed session 26.
Dec 13 14:27:24.546790 systemd[1]: Started sshd@26-172.31.21.15:22-139.178.89.65:37024.service.
Dec 13 14:27:24.743479 sshd[4495]: Accepted publickey for core from 139.178.89.65 port 37024 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:27:24.747555 sshd[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:27:24.770797 systemd-logind[1722]: New session 27 of user core.
Dec 13 14:27:24.770841 systemd[1]: Started session-27.scope.
Dec 13 14:27:24.969577 kubelet[2748]: E1213 14:27:24.969260 2748 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:27:25.566402 sshd[4495]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:25.572106 systemd-logind[1722]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:27:25.585566 systemd[1]: sshd@26-172.31.21.15:22-139.178.89.65:37024.service: Deactivated successfully. Dec 13 14:27:25.587753 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:27:25.598611 systemd-logind[1722]: Removed session 27. Dec 13 14:27:25.605442 systemd[1]: Started sshd@27-172.31.21.15:22-139.178.89.65:37040.service. Dec 13 14:27:25.646315 kubelet[2748]: I1213 14:27:25.646276 2748 topology_manager.go:215] "Topology Admit Handler" podUID="bf3e557e-261c-4251-8abd-5943e1ac02ca" podNamespace="kube-system" podName="cilium-6drls" Dec 13 14:27:25.650156 kubelet[2748]: E1213 14:27:25.650113 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f531703e-ee31-46ae-b8c2-67ff72a8ab44" containerName="mount-cgroup" Dec 13 14:27:25.650356 kubelet[2748]: E1213 14:27:25.650342 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f531703e-ee31-46ae-b8c2-67ff72a8ab44" containerName="clean-cilium-state" Dec 13 14:27:25.650440 kubelet[2748]: E1213 14:27:25.650431 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f531703e-ee31-46ae-b8c2-67ff72a8ab44" containerName="apply-sysctl-overwrites" Dec 13 14:27:25.650535 kubelet[2748]: E1213 14:27:25.650524 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f531703e-ee31-46ae-b8c2-67ff72a8ab44" containerName="mount-bpf-fs" Dec 13 14:27:25.650625 kubelet[2748]: E1213 14:27:25.650616 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f531703e-ee31-46ae-b8c2-67ff72a8ab44" containerName="cilium-agent" Dec 13 14:27:25.650691 kubelet[2748]: E1213 14:27:25.650684 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6787b7f6-4e80-45ef-ad8c-902e1c3fed5a" containerName="cilium-operator" Dec 13 14:27:25.650825 kubelet[2748]: I1213 14:27:25.650815 2748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f531703e-ee31-46ae-b8c2-67ff72a8ab44" containerName="cilium-agent" Dec 13 14:27:25.651072 kubelet[2748]: I1213 14:27:25.651058 2748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6787b7f6-4e80-45ef-ad8c-902e1c3fed5a" containerName="cilium-operator" Dec 13 14:27:25.675748 kubelet[2748]: I1213 14:27:25.675571 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-lib-modules\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.675748 kubelet[2748]: I1213 14:27:25.675632 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-ipsec-secrets\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.675748 kubelet[2748]: I1213 14:27:25.675670 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmb7r\" 
(UniqueName: \"kubernetes.io/projected/bf3e557e-261c-4251-8abd-5943e1ac02ca-kube-api-access-pmb7r\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.675748 kubelet[2748]: I1213 14:27:25.675721 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-run\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.675748 kubelet[2748]: I1213 14:27:25.675751 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-xtables-lock\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676082 kubelet[2748]: I1213 14:27:25.675779 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf3e557e-261c-4251-8abd-5943e1ac02ca-clustermesh-secrets\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676082 kubelet[2748]: I1213 14:27:25.675812 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-host-proc-sys-net\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676082 kubelet[2748]: I1213 14:27:25.675838 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf3e557e-261c-4251-8abd-5943e1ac02ca-hubble-tls\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676082 kubelet[2748]: I1213 14:27:25.675869 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-etc-cni-netd\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676082 kubelet[2748]: I1213 14:27:25.675904 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-bpf-maps\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676082 kubelet[2748]: I1213 14:27:25.675934 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-cgroup\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676656 kubelet[2748]: I1213 14:27:25.675982 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-hostproc\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 
14:27:25.676656 kubelet[2748]: I1213 14:27:25.676014 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cni-path\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676656 kubelet[2748]: I1213 14:27:25.676050 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-config-path\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.676656 kubelet[2748]: I1213 14:27:25.676083 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-host-proc-sys-kernel\") pod \"cilium-6drls\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " pod="kube-system/cilium-6drls" Dec 13 14:27:25.678301 systemd[1]: Created slice kubepods-burstable-podbf3e557e_261c_4251_8abd_5943e1ac02ca.slice. Dec 13 14:27:25.820420 sshd[4505]: Accepted publickey for core from 139.178.89.65 port 37040 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:25.835238 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:25.867834 systemd[1]: Started session-28.scope. Dec 13 14:27:25.868964 systemd-logind[1722]: New session 28 of user core. Dec 13 14:27:25.985207 env[1731]: time="2024-12-13T14:27:25.985149478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6drls,Uid:bf3e557e-261c-4251-8abd-5943e1ac02ca,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:26.020944 env[1731]: time="2024-12-13T14:27:26.020541409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:26.020944 env[1731]: time="2024-12-13T14:27:26.020634905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:26.020944 env[1731]: time="2024-12-13T14:27:26.020675286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:26.020944 env[1731]: time="2024-12-13T14:27:26.020854539Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317 pid=4526 runtime=io.containerd.runc.v2 Dec 13 14:27:26.065252 systemd[1]: Started cri-containerd-e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317.scope. 
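The reconciler entries above attach a mix of hostPath, secret, configMap, and projected volumes to cilium-6drls. As a hedged sketch of what the hostPath subset looks like in a pod spec, using the upstream k8s.io/api types (the host paths are typical Cilium defaults and are assumptions here, except /run/cilium/cgroupv2 and /opt/cni/bin, which appear verbatim as CGROUP_ROOT and BIN_PATH in the init-container spec dumped later in this log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVol builds a hostPath volume entry like the ones the
// VerifyControllerAttachedVolume operations above are checking.
func hostPathVol(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	vols := []corev1.Volume{
		hostPathVol("lib-modules", "/lib/modules"),
		hostPathVol("xtables-lock", "/run/xtables.lock"),
		hostPathVol("cilium-run", "/var/run/cilium"),
		hostPathVol("cilium-cgroup", "/run/cilium/cgroupv2"),
		hostPathVol("cni-path", "/opt/cni/bin"),
		hostPathVol("etc-cni-netd", "/etc/cni/net.d"),
		hostPathVol("bpf-maps", "/sys/fs/bpf"),
		hostPathVol("hostproc", "/proc"),
	}
	for _, v := range vols {
		fmt.Printf("%-13s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}

The secret/projected entries (clustermesh-secrets, cilium-ipsec-secrets, hubble-tls, kube-api-access-pmb7r) follow the same shape with SecretVolumeSource or ProjectedVolumeSource in place of HostPath.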
Dec 13 14:27:26.132032 env[1731]: time="2024-12-13T14:27:26.131703568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6drls,Uid:bf3e557e-261c-4251-8abd-5943e1ac02ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\"" Dec 13 14:27:26.145076 env[1731]: time="2024-12-13T14:27:26.144984433Z" level=info msg="CreateContainer within sandbox \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:27:26.183079 env[1731]: time="2024-12-13T14:27:26.183018795Z" level=info msg="CreateContainer within sandbox \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770\"" Dec 13 14:27:26.186856 env[1731]: time="2024-12-13T14:27:26.186805827Z" level=info msg="StartContainer for \"3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770\"" Dec 13 14:27:26.246439 systemd[1]: Started cri-containerd-3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770.scope. Dec 13 14:27:26.277524 systemd[1]: cri-containerd-3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770.scope: Deactivated successfully. Dec 13 14:27:26.314273 env[1731]: time="2024-12-13T14:27:26.314188983Z" level=info msg="shim disconnected" id=3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770 Dec 13 14:27:26.314273 env[1731]: time="2024-12-13T14:27:26.314271561Z" level=warning msg="cleaning up after shim disconnected" id=3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770 namespace=k8s.io Dec 13 14:27:26.314273 env[1731]: time="2024-12-13T14:27:26.314285484Z" level=info msg="cleaning up dead shim" Dec 13 14:27:26.317917 sshd[4505]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:26.322696 systemd[1]: sshd@27-172.31.21.15:22-139.178.89.65:37040.service: Deactivated successfully. Dec 13 14:27:26.323507 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 14:27:26.324897 systemd-logind[1722]: Session 28 logged out. Waiting for processes to exit. Dec 13 14:27:26.326522 systemd-logind[1722]: Removed session 28. Dec 13 14:27:26.352052 systemd[1]: Started sshd@28-172.31.21.15:22-139.178.89.65:37056.service. 
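Each "starting signal loop" entry (namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/... pid=... runtime=io.containerd.runc.v2) marks a runc v2 shim starting for one task. A small sketch using the containerd Go client to enumerate those tasks out-of-band; the socket path is the stock default and an assumption for this host:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the k8s.io namespace, matching the
	// namespace=k8s.io field on the entries above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			continue // no live task, e.g. an init step that already exited
		}
		fmt.Printf("%s pid=%d\n", c.ID(), task.Pid())
	}
}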
Dec 13 14:27:26.368030 env[1731]: time="2024-12-13T14:27:26.367858215Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4589 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:27:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:27:26.368850 env[1731]: time="2024-12-13T14:27:26.368722292Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Dec 13 14:27:26.372478 env[1731]: time="2024-12-13T14:27:26.370805147Z" level=error msg="Failed to pipe stderr of container \"3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770\"" error="reading from a closed fifo" Dec 13 14:27:26.372478 env[1731]: time="2024-12-13T14:27:26.370884240Z" level=error msg="Failed to pipe stdout of container \"3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770\"" error="reading from a closed fifo" Dec 13 14:27:26.374232 env[1731]: time="2024-12-13T14:27:26.374155560Z" level=error msg="StartContainer for \"3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:27:26.374711 kubelet[2748]: E1213 14:27:26.374554 2748 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770" Dec 13 14:27:26.385200 kubelet[2748]: E1213 14:27:26.383805 2748 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:27:26.385200 kubelet[2748]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:27:26.385200 kubelet[2748]: rm /hostbin/cilium-mount Dec 13 14:27:26.385438 kubelet[2748]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-pmb7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6drls_kube-system(bf3e557e-261c-4251-8abd-5943e1ac02ca): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:27:26.387780 kubelet[2748]: E1213 14:27:26.387752 2748 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6drls" podUID="bf3e557e-261c-4251-8abd-5943e1ac02ca" Dec 13 14:27:26.459416 env[1731]: time="2024-12-13T14:27:26.459148607Z" level=info msg="StopPodSandbox for \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\"" Dec 13 14:27:26.459416 env[1731]: time="2024-12-13T14:27:26.459214149Z" level=info msg="Container to stop \"3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:27:26.471268 systemd[1]: cri-containerd-e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317.scope: Deactivated successfully. Dec 13 14:27:26.521745 env[1731]: time="2024-12-13T14:27:26.521691114Z" level=info msg="shim disconnected" id=e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317 Dec 13 14:27:26.521745 env[1731]: time="2024-12-13T14:27:26.521742930Z" level=warning msg="cleaning up after shim disconnected" id=e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317 namespace=k8s.io Dec 13 14:27:26.521745 env[1731]: time="2024-12-13T14:27:26.521754878Z" level=info msg="cleaning up dead shim" Dec 13 14:27:26.531668 env[1731]: time="2024-12-13T14:27:26.531628135Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4626 runtime=io.containerd.runc.v2\n" Dec 13 14:27:26.532159 env[1731]: time="2024-12-13T14:27:26.532117743Z" level=info msg="TearDown network for sandbox \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\" successfully" Dec 13 14:27:26.532159 env[1731]: time="2024-12-13T14:27:26.532146434Z" level=info msg="StopPodSandbox for \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\" returns successfully" Dec 13 14:27:26.555994 sshd[4605]: Accepted publickey for core from 139.178.89.65 port 37056 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:26.558901 sshd[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:26.580378 systemd[1]: Started session-29.scope. Dec 13 14:27:26.582533 systemd-logind[1722]: New session 29 of user core. 
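The failing step in the error above is "write /proc/self/attr/keycreate: invalid argument": before exec'ing the container process, runc writes the container's SELinux label (the spec dump carries SELinuxOptions with Type:spc_t) to the keycreate attribute so new kernel keys inherit it, and a kernel without a matching SELinux setup rejects the write with EINVAL, which surfaces as the RunContainerError. A minimal probe of that single write, purely illustrative, with the label taken from the spec dump above:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Mirrors the label runc derives from SELinuxOptions Type:spc_t.
	label := []byte("system_u:system_r:spc_t:s0")
	if err := os.WriteFile("/proc/self/attr/keycreate", label, 0o644); err != nil {
		// On this node the kernel answered EINVAL ("invalid argument").
		fmt.Println("write /proc/self/attr/keycreate:", err)
		return
	}
	fmt.Println("keyring label accepted; new kernel keys will carry spc_t")
}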
Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685064 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-lib-modules\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685135 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-cgroup\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685226 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-config-path\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685269 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-host-proc-sys-net\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685297 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-xtables-lock\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685320 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-etc-cni-netd\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685397 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-host-proc-sys-kernel\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685450 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-ipsec-secrets\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685503 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-bpf-maps\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685556 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmb7r\" (UniqueName: \"kubernetes.io/projected/bf3e557e-261c-4251-8abd-5943e1ac02ca-kube-api-access-pmb7r\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 
kubelet[2748]: I1213 14:27:26.685590 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf3e557e-261c-4251-8abd-5943e1ac02ca-clustermesh-secrets\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685637 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-hostproc\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685742 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cni-path\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685831 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-run\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.685863 2748 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf3e557e-261c-4251-8abd-5943e1ac02ca-hubble-tls\") pod \"bf3e557e-261c-4251-8abd-5943e1ac02ca\" (UID: \"bf3e557e-261c-4251-8abd-5943e1ac02ca\") " Dec 13 14:27:26.686440 kubelet[2748]: I1213 14:27:26.686234 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.690942 kubelet[2748]: I1213 14:27:26.689762 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.690942 kubelet[2748]: I1213 14:27:26.689649 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.690942 kubelet[2748]: I1213 14:27:26.689931 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.691762 kubelet[2748]: I1213 14:27:26.691737 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.697015 kubelet[2748]: I1213 14:27:26.691914 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-hostproc" (OuterVolumeSpecName: "hostproc") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.697015 kubelet[2748]: I1213 14:27:26.691940 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cni-path" (OuterVolumeSpecName: "cni-path") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.697015 kubelet[2748]: I1213 14:27:26.691982 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.697015 kubelet[2748]: I1213 14:27:26.692000 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.697015 kubelet[2748]: I1213 14:27:26.692240 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:27:26.699920 kubelet[2748]: I1213 14:27:26.699854 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:27:26.702186 kubelet[2748]: I1213 14:27:26.702152 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3e557e-261c-4251-8abd-5943e1ac02ca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:27:26.708102 kubelet[2748]: I1213 14:27:26.707974 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3e557e-261c-4251-8abd-5943e1ac02ca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:27:26.710951 kubelet[2748]: I1213 14:27:26.710854 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:27:26.711534 kubelet[2748]: I1213 14:27:26.711506 2748 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3e557e-261c-4251-8abd-5943e1ac02ca-kube-api-access-pmb7r" (OuterVolumeSpecName: "kube-api-access-pmb7r") pod "bf3e557e-261c-4251-8abd-5943e1ac02ca" (UID: "bf3e557e-261c-4251-8abd-5943e1ac02ca"). InnerVolumeSpecName "kube-api-access-pmb7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:27:26.786171 kubelet[2748]: I1213 14:27:26.786135 2748 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-host-proc-sys-kernel\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.786380 kubelet[2748]: I1213 14:27:26.786366 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-ipsec-secrets\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.786512 kubelet[2748]: I1213 14:27:26.786502 2748 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-bpf-maps\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.786656 kubelet[2748]: I1213 14:27:26.786642 2748 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pmb7r\" (UniqueName: \"kubernetes.io/projected/bf3e557e-261c-4251-8abd-5943e1ac02ca-kube-api-access-pmb7r\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.786757 kubelet[2748]: I1213 14:27:26.786747 2748 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf3e557e-261c-4251-8abd-5943e1ac02ca-clustermesh-secrets\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.786855 kubelet[2748]: I1213 14:27:26.786846 2748 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-hostproc\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.786958 kubelet[2748]: I1213 14:27:26.786950 2748 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cni-path\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.787047 kubelet[2748]: I1213 14:27:26.787038 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-run\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.787140 kubelet[2748]: I1213 14:27:26.787132 2748 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf3e557e-261c-4251-8abd-5943e1ac02ca-hubble-tls\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.787229 kubelet[2748]: I1213 14:27:26.787221 2748 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-lib-modules\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.787323 kubelet[2748]: I1213 14:27:26.787314 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-cgroup\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.787425 kubelet[2748]: I1213 14:27:26.787417 2748 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf3e557e-261c-4251-8abd-5943e1ac02ca-cilium-config-path\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.787618 kubelet[2748]: I1213 14:27:26.787608 2748 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-host-proc-sys-net\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.787719 kubelet[2748]: I1213 14:27:26.787711 2748 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-xtables-lock\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.787821 kubelet[2748]: I1213 14:27:26.787813 2748 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf3e557e-261c-4251-8abd-5943e1ac02ca-etc-cni-netd\") on node \"ip-172-31-21-15\" DevicePath \"\"" Dec 13 14:27:26.788219 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317-shm.mount: Deactivated successfully. Dec 13 14:27:26.788581 systemd[1]: var-lib-kubelet-pods-bf3e557e\x2d261c\x2d4251\x2d8abd\x2d5943e1ac02ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpmb7r.mount: Deactivated successfully. Dec 13 14:27:26.788681 systemd[1]: var-lib-kubelet-pods-bf3e557e\x2d261c\x2d4251\x2d8abd\x2d5943e1ac02ca-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:27:26.788759 systemd[1]: var-lib-kubelet-pods-bf3e557e\x2d261c\x2d4251\x2d8abd\x2d5943e1ac02ca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:27:26.788840 systemd[1]: var-lib-kubelet-pods-bf3e557e\x2d261c\x2d4251\x2d8abd\x2d5943e1ac02ca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:27:27.462556 kubelet[2748]: I1213 14:27:27.462528 2748 scope.go:117] "RemoveContainer" containerID="3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770" Dec 13 14:27:27.470316 env[1731]: time="2024-12-13T14:27:27.469366368Z" level=info msg="RemoveContainer for \"3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770\"" Dec 13 14:27:27.475817 systemd[1]: Removed slice kubepods-burstable-podbf3e557e_261c_4251_8abd_5943e1ac02ca.slice. 
Dec 13 14:27:27.482921 env[1731]: time="2024-12-13T14:27:27.482812332Z" level=info msg="RemoveContainer for \"3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770\" returns successfully" Dec 13 14:27:27.549351 kubelet[2748]: I1213 14:27:27.549312 2748 topology_manager.go:215] "Topology Admit Handler" podUID="2473674c-2155-47e4-882a-07235d6383ba" podNamespace="kube-system" podName="cilium-x548j" Dec 13 14:27:27.549563 kubelet[2748]: E1213 14:27:27.549382 2748 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf3e557e-261c-4251-8abd-5943e1ac02ca" containerName="mount-cgroup" Dec 13 14:27:27.549563 kubelet[2748]: I1213 14:27:27.549414 2748 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3e557e-261c-4251-8abd-5943e1ac02ca" containerName="mount-cgroup" Dec 13 14:27:27.608116 systemd[1]: Created slice kubepods-burstable-pod2473674c_2155_47e4_882a_07235d6383ba.slice. Dec 13 14:27:27.697689 kubelet[2748]: I1213 14:27:27.697647 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-cilium-cgroup\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.698028 kubelet[2748]: I1213 14:27:27.698012 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2473674c-2155-47e4-882a-07235d6383ba-cilium-ipsec-secrets\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.698421 kubelet[2748]: I1213 14:27:27.698211 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-host-proc-sys-kernel\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.698648 kubelet[2748]: I1213 14:27:27.698635 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-cilium-run\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.698823 kubelet[2748]: I1213 14:27:27.698792 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-lib-modules\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.699007 kubelet[2748]: I1213 14:27:27.698976 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2473674c-2155-47e4-882a-07235d6383ba-clustermesh-secrets\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.699373 kubelet[2748]: I1213 14:27:27.699156 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-cni-path\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 
14:27:27.699606 kubelet[2748]: I1213 14:27:27.699594 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-xtables-lock\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.699810 kubelet[2748]: I1213 14:27:27.699798 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2473674c-2155-47e4-882a-07235d6383ba-hubble-tls\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.699944 kubelet[2748]: I1213 14:27:27.699934 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbttm\" (UniqueName: \"kubernetes.io/projected/2473674c-2155-47e4-882a-07235d6383ba-kube-api-access-zbttm\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.700136 kubelet[2748]: I1213 14:27:27.700125 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-bpf-maps\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.700262 kubelet[2748]: I1213 14:27:27.700253 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-hostproc\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.700414 kubelet[2748]: I1213 14:27:27.700396 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-etc-cni-netd\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.700612 kubelet[2748]: I1213 14:27:27.700600 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2473674c-2155-47e4-882a-07235d6383ba-cilium-config-path\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.700812 kubelet[2748]: I1213 14:27:27.700801 2748 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2473674c-2155-47e4-882a-07235d6383ba-host-proc-sys-net\") pod \"cilium-x548j\" (UID: \"2473674c-2155-47e4-882a-07235d6383ba\") " pod="kube-system/cilium-x548j" Dec 13 14:27:27.793629 kubelet[2748]: I1213 14:27:27.793594 2748 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bf3e557e-261c-4251-8abd-5943e1ac02ca" path="/var/lib/kubelet/pods/bf3e557e-261c-4251-8abd-5943e1ac02ca/volumes" Dec 13 14:27:27.920623 env[1731]: time="2024-12-13T14:27:27.920543841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x548j,Uid:2473674c-2155-47e4-882a-07235d6383ba,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:27.969655 env[1731]: time="2024-12-13T14:27:27.969576985Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:27.969860 env[1731]: time="2024-12-13T14:27:27.969619832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:27.969860 env[1731]: time="2024-12-13T14:27:27.969635603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:27.969860 env[1731]: time="2024-12-13T14:27:27.969788070Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c pid=4661 runtime=io.containerd.runc.v2 Dec 13 14:27:27.994071 systemd[1]: Started cri-containerd-1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c.scope. Dec 13 14:27:28.036444 env[1731]: time="2024-12-13T14:27:28.036402831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x548j,Uid:2473674c-2155-47e4-882a-07235d6383ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\"" Dec 13 14:27:28.044808 env[1731]: time="2024-12-13T14:27:28.044107767Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:27:28.071659 env[1731]: time="2024-12-13T14:27:28.071601728Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb\"" Dec 13 14:27:28.073057 env[1731]: time="2024-12-13T14:27:28.072792459Z" level=info msg="StartContainer for \"edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb\"" Dec 13 14:27:28.107528 systemd[1]: Started cri-containerd-edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb.scope. Dec 13 14:27:28.152331 env[1731]: time="2024-12-13T14:27:28.152278864Z" level=info msg="StartContainer for \"edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb\" returns successfully" Dec 13 14:27:28.228719 systemd[1]: cri-containerd-edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb.scope: Deactivated successfully. 
Dec 13 14:27:28.287822 env[1731]: time="2024-12-13T14:27:28.287766464Z" level=info msg="shim disconnected" id=edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb Dec 13 14:27:28.287822 env[1731]: time="2024-12-13T14:27:28.287822428Z" level=warning msg="cleaning up after shim disconnected" id=edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb namespace=k8s.io Dec 13 14:27:28.288202 env[1731]: time="2024-12-13T14:27:28.287834036Z" level=info msg="cleaning up dead shim" Dec 13 14:27:28.299589 env[1731]: time="2024-12-13T14:27:28.299444681Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4746 runtime=io.containerd.runc.v2\n" Dec 13 14:27:28.483366 env[1731]: time="2024-12-13T14:27:28.483306021Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:27:28.537805 env[1731]: time="2024-12-13T14:27:28.535240932Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734\"" Dec 13 14:27:28.540483 env[1731]: time="2024-12-13T14:27:28.540065586Z" level=info msg="StartContainer for \"bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734\"" Dec 13 14:27:28.573991 systemd[1]: Started cri-containerd-bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734.scope. Dec 13 14:27:28.640009 env[1731]: time="2024-12-13T14:27:28.637486505Z" level=info msg="StartContainer for \"bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734\" returns successfully" Dec 13 14:27:28.685869 systemd[1]: cri-containerd-bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734.scope: Deactivated successfully. 
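Each short-lived init step above (and again just below) ends with the same trio of entries: "shim disconnected", "cleaning up after shim disconnected", "cleaning up dead shim". That is the normal teardown path for a runc v2 shim whose container has exited. A throwaway sketch for pulling the 64-hex container IDs out of raw journal output, to correlate them with the earlier CreateContainer/StartContainer entries:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches containerd's "shim disconnected" entries and captures the
// 64-character hex container ID.
var shimGone = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		if m := shimGone.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println(m[1])
		}
	}
}

Pipe `journalctl -u containerd` through it to get one ID per teardown, e.g. edf71144... and bdfba36e... from the entries here.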
Dec 13 14:27:28.756592 env[1731]: time="2024-12-13T14:27:28.756539664Z" level=info msg="shim disconnected" id=bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734 Dec 13 14:27:28.756592 env[1731]: time="2024-12-13T14:27:28.756595213Z" level=warning msg="cleaning up after shim disconnected" id=bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734 namespace=k8s.io Dec 13 14:27:28.757043 env[1731]: time="2024-12-13T14:27:28.756606580Z" level=info msg="cleaning up dead shim" Dec 13 14:27:28.788084 env[1731]: time="2024-12-13T14:27:28.788036276Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4808 runtime=io.containerd.runc.v2\n" Dec 13 14:27:28.790663 kubelet[2748]: E1213 14:27:28.790270 2748 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-xmxqs" podUID="cf1b1412-1eb1-4404-8463-f11d4987f414" Dec 13 14:27:29.462323 kubelet[2748]: W1213 14:27:29.462275 2748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf3e557e_261c_4251_8abd_5943e1ac02ca.slice/cri-containerd-3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770.scope WatchSource:0}: container "3efc3688fd3227633320b37739415853eda9b9b8d9a13ee64ed1a3db9e618770" in namespace "k8s.io": not found Dec 13 14:27:29.486823 env[1731]: time="2024-12-13T14:27:29.486776131Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:27:29.522820 env[1731]: time="2024-12-13T14:27:29.522764954Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033\"" Dec 13 14:27:29.523700 env[1731]: time="2024-12-13T14:27:29.523666123Z" level=info msg="StartContainer for \"9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033\"" Dec 13 14:27:29.588332 systemd[1]: Started cri-containerd-9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033.scope. Dec 13 14:27:29.669418 env[1731]: time="2024-12-13T14:27:29.669361759Z" level=info msg="StartContainer for \"9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033\" returns successfully" Dec 13 14:27:29.721874 systemd[1]: cri-containerd-9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033.scope: Deactivated successfully. 
Dec 13 14:27:29.769351 env[1731]: time="2024-12-13T14:27:29.769296238Z" level=info msg="shim disconnected" id=9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033 Dec 13 14:27:29.769351 env[1731]: time="2024-12-13T14:27:29.769348532Z" level=warning msg="cleaning up after shim disconnected" id=9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033 namespace=k8s.io Dec 13 14:27:29.769351 env[1731]: time="2024-12-13T14:27:29.769360509Z" level=info msg="cleaning up dead shim" Dec 13 14:27:29.780250 env[1731]: time="2024-12-13T14:27:29.780200262Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4871 runtime=io.containerd.runc.v2\n" Dec 13 14:27:29.818262 systemd[1]: run-containerd-runc-k8s.io-9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033-runc.wrrMFu.mount: Deactivated successfully. Dec 13 14:27:29.818389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033-rootfs.mount: Deactivated successfully. Dec 13 14:27:29.979804 kubelet[2748]: E1213 14:27:29.976787 2748 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:27:30.489486 env[1731]: time="2024-12-13T14:27:30.488595953Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:27:30.519740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447541300.mount: Deactivated successfully. Dec 13 14:27:30.530574 env[1731]: time="2024-12-13T14:27:30.530516408Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421\"" Dec 13 14:27:30.532997 env[1731]: time="2024-12-13T14:27:30.531603643Z" level=info msg="StartContainer for \"43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421\"" Dec 13 14:27:30.553220 systemd[1]: Started cri-containerd-43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421.scope. Dec 13 14:27:30.587255 systemd[1]: cri-containerd-43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421.scope: Deactivated successfully. 
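The recurring "cni plugin not initialized" errors here persist until the cilium-agent container comes up and installs a CNI config on the host; the kubelet only flips NetworkReady once a config exists under /etc/cni/net.d (the directory mounted into the pod via the etc-cni-netd volume above). A hedged check of that condition; the glob pattern is an assumption, since Cilium has written both .conflist and plain .conf files depending on version:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// NetworkReady stays false until a CNI config shows up here.
	matches, err := filepath.Glob("/etc/cni/net.d/*.conf*")
	if err != nil || len(matches) == 0 {
		fmt.Println("NetworkReady=false: no CNI config installed yet")
		return
	}
	for _, m := range matches {
		if fi, statErr := os.Stat(m); statErr == nil {
			fmt.Printf("CNI config present: %s (%d bytes)\n", m, fi.Size())
		}
	}
}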
Dec 13 14:27:30.591015 env[1731]: time="2024-12-13T14:27:30.590962766Z" level=info msg="StartContainer for \"43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421\" returns successfully" Dec 13 14:27:30.634238 env[1731]: time="2024-12-13T14:27:30.634180887Z" level=info msg="shim disconnected" id=43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421 Dec 13 14:27:30.634238 env[1731]: time="2024-12-13T14:27:30.634238451Z" level=warning msg="cleaning up after shim disconnected" id=43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421 namespace=k8s.io Dec 13 14:27:30.634602 env[1731]: time="2024-12-13T14:27:30.634251664Z" level=info msg="cleaning up dead shim" Dec 13 14:27:30.644735 env[1731]: time="2024-12-13T14:27:30.644680268Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4931 runtime=io.containerd.runc.v2\n" Dec 13 14:27:30.790216 kubelet[2748]: E1213 14:27:30.790172 2748 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-xmxqs" podUID="cf1b1412-1eb1-4404-8463-f11d4987f414" Dec 13 14:27:31.515535 env[1731]: time="2024-12-13T14:27:31.514367796Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:27:31.588743 env[1731]: time="2024-12-13T14:27:31.588690537Z" level=info msg="CreateContainer within sandbox \"1a92c63126755aa8d3954cd7a8166a7d36ab0a909c0fbba30c723590b067741c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e86a53fbcf16a9c12cb0beb65bf261ad0b3c215bfbc7f5ff5f8447d2036f2a50\"" Dec 13 14:27:31.592305 env[1731]: time="2024-12-13T14:27:31.589791456Z" level=info msg="StartContainer for \"e86a53fbcf16a9c12cb0beb65bf261ad0b3c215bfbc7f5ff5f8447d2036f2a50\"" Dec 13 14:27:31.622891 systemd[1]: Started cri-containerd-e86a53fbcf16a9c12cb0beb65bf261ad0b3c215bfbc7f5ff5f8447d2036f2a50.scope. Dec 13 14:27:31.686842 env[1731]: time="2024-12-13T14:27:31.686789743Z" level=info msg="StartContainer for \"e86a53fbcf16a9c12cb0beb65bf261ad0b3c215bfbc7f5ff5f8447d2036f2a50\" returns successfully" Dec 13 14:27:31.818063 systemd[1]: run-containerd-runc-k8s.io-e86a53fbcf16a9c12cb0beb65bf261ad0b3c215bfbc7f5ff5f8447d2036f2a50-runc.wnRwGI.mount: Deactivated successfully. 
Dec 13 14:27:32.480533 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:27:32.549413 kubelet[2748]: I1213 14:27:32.549363 2748 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x548j" podStartSLOduration=5.549180177 podStartE2EDuration="5.549180177s" podCreationTimestamp="2024-12-13 14:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:32.545172713 +0000 UTC m=+143.092515284" watchObservedRunningTime="2024-12-13 14:27:32.549180177 +0000 UTC m=+143.096522754" Dec 13 14:27:32.580992 kubelet[2748]: W1213 14:27:32.580941 2748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2473674c_2155_47e4_882a_07235d6383ba.slice/cri-containerd-edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb.scope WatchSource:0}: task edf71144663743b47ea8cf7db9a342c2387278c62c682a8fdee96af04ccb70fb not found: not found Dec 13 14:27:32.714742 kubelet[2748]: I1213 14:27:32.714714 2748 setters.go:568] "Node became not ready" node="ip-172-31-21-15" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:27:32Z","lastTransitionTime":"2024-12-13T14:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:27:32.790131 kubelet[2748]: E1213 14:27:32.790097 2748 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-xmxqs" podUID="cf1b1412-1eb1-4404-8463-f11d4987f414" Dec 13 14:27:33.070361 systemd[1]: run-containerd-runc-k8s.io-e86a53fbcf16a9c12cb0beb65bf261ad0b3c215bfbc7f5ff5f8447d2036f2a50-runc.Xo5mhq.mount: Deactivated successfully. Dec 13 14:27:34.790766 kubelet[2748]: E1213 14:27:34.790717 2748 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-xmxqs" podUID="cf1b1412-1eb1-4404-8463-f11d4987f414" Dec 13 14:27:35.340654 systemd[1]: run-containerd-runc-k8s.io-e86a53fbcf16a9c12cb0beb65bf261ad0b3c215bfbc7f5ff5f8447d2036f2a50-runc.PXw8HA.mount: Deactivated successfully. Dec 13 14:27:35.694240 kubelet[2748]: W1213 14:27:35.693974 2748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2473674c_2155_47e4_882a_07235d6383ba.slice/cri-containerd-bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734.scope WatchSource:0}: task bdfba36e471b508438c5f0e7f753b0f300dac6b1eabf42c7853653491b428734 not found: not found Dec 13 14:27:36.378811 (udev-worker)[5524]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:27:36.381094 systemd-networkd[1464]: lxc_health: Link UP Dec 13 14:27:36.393529 (udev-worker)[5525]: Network interface NamePolicy= disabled on kernel command line. 
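The podStartSLOduration=5.549180177 in the latency-tracker entry above is straight subtraction: observedRunningTime (14:27:32.549180177) minus podCreationTimestamp (14:27:27), with the pull window contributing nothing because both pulling timestamps are the zero time (the image was already present on the node). Reproducing the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's time.String() output as printed in the entry above;
	// the fractional-seconds field is optional when parsing.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2024-12-13 14:27:27 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-12-13 14:27:32.549180177 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 5.549180177s
}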
Dec 13 14:27:36.446104 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:27:36.445327 systemd-networkd[1464]: lxc_health: Gained carrier Dec 13 14:27:37.751666 systemd-networkd[1464]: lxc_health: Gained IPv6LL Dec 13 14:27:38.807374 kubelet[2748]: W1213 14:27:38.807326 2748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2473674c_2155_47e4_882a_07235d6383ba.slice/cri-containerd-9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033.scope WatchSource:0}: task 9351e58c85769fdb2e07c31bed5f19ec4478d39de056359d405ea3dfd7948033 not found: not found Dec 13 14:27:41.923629 kubelet[2748]: W1213 14:27:41.923576 2748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2473674c_2155_47e4_882a_07235d6383ba.slice/cri-containerd-43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421.scope WatchSource:0}: task 43a3c83f4bec2e3197c8be83fb9c7656a986adfc7108a6e367d5fe43e1e79421 not found: not found Dec 13 14:27:42.801969 sshd[4605]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:42.809665 systemd-logind[1722]: Session 29 logged out. Waiting for processes to exit. Dec 13 14:27:42.812351 systemd[1]: sshd@28-172.31.21.15:22-139.178.89.65:37056.service: Deactivated successfully. Dec 13 14:27:42.813386 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 14:27:42.817129 systemd-logind[1722]: Removed session 29. Dec 13 14:27:47.138936 update_engine[1723]: I1213 14:27:47.138873 1723 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 14:27:47.139512 update_engine[1723]: I1213 14:27:47.138954 1723 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 14:27:47.141696 update_engine[1723]: I1213 14:27:47.141659 1723 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 14:27:47.142793 update_engine[1723]: I1213 14:27:47.142716 1723 omaha_request_params.cc:62] Current group set to lts Dec 13 14:27:47.147007 update_engine[1723]: I1213 14:27:47.146867 1723 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 14:27:47.147007 update_engine[1723]: I1213 14:27:47.146890 1723 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 14:27:47.138936 update_engine[1723]: I1213 14:27:47.138873 1723 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Dec 13 14:27:47.139512 update_engine[1723]: I1213 14:27:47.138954 1723 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Dec 13 14:27:47.141696 update_engine[1723]: I1213 14:27:47.141659 1723 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Dec 13 14:27:47.142793 update_engine[1723]: I1213 14:27:47.142716 1723 omaha_request_params.cc:62] Current group set to lts
Dec 13 14:27:47.147007 update_engine[1723]: I1213 14:27:47.146867 1723 update_attempter.cc:499] Already updated boot flags. Skipping.
Dec 13 14:27:47.147007 update_engine[1723]: I1213 14:27:47.146890 1723 update_attempter.cc:643] Scheduling an action processor start.
Dec 13 14:27:47.147007 update_engine[1723]: I1213 14:27:47.146914 1723 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 14:27:47.157579 update_engine[1723]: I1213 14:27:47.157526 1723 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Dec 13 14:27:47.157806 update_engine[1723]: I1213 14:27:47.157696 1723 omaha_request_action.cc:270] Posting an Omaha request to disabled
Dec 13 14:27:47.157806 update_engine[1723]: I1213 14:27:47.157706 1723 omaha_request_action.cc:271] Request:
Dec 13 14:27:47.157806 update_engine[1723]:
Dec 13 14:27:47.157806 update_engine[1723]:
Dec 13 14:27:47.157806 update_engine[1723]:
Dec 13 14:27:47.157806 update_engine[1723]:
Dec 13 14:27:47.157806 update_engine[1723]:
Dec 13 14:27:47.157806 update_engine[1723]:
Dec 13 14:27:47.157806 update_engine[1723]:
Dec 13 14:27:47.157806 update_engine[1723]:
Dec 13 14:27:47.157806 update_engine[1723]: I1213 14:27:47.157711 1723 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 14:27:47.186117 update_engine[1723]: I1213 14:27:47.186053 1723 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 14:27:47.191884 update_engine[1723]: I1213 14:27:47.191831 1723 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 14:27:47.197562 update_engine[1723]: E1213 14:27:47.197512 1723 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 14:27:47.198281 update_engine[1723]: I1213 14:27:47.197971 1723 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Dec 13 14:27:47.212760 locksmithd[1792]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
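The "Could not resolve host: disabled" failure above is literal: when automatic updates are switched off (conventionally SERVER=disabled in /etc/flatcar/update.conf; an assumption, since the config file itself is not in this log), update_engine still posts its Omaha request to the configured server string, and libcurl tries to resolve the word "disabled" as a hostname. A tiny Go sketch reproducing the same resolver error:

package main

import (
	"fmt"
	"net"
)

func main() {
	// "disabled" is not a resolvable hostname, so this fails the same way
	// libcurl does in the update_engine entries above.
	_, err := net.LookupHost("disabled")
	fmt.Println(err) // e.g. "lookup disabled: no such host"
}

update_engine then retries on its one-second timeout source, which is why the identical error repeats below at roughly ten-second intervals.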
Dec 13 14:27:57.137560 update_engine[1723]: I1213 14:27:57.137507 1723 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 14:27:57.138017 update_engine[1723]: I1213 14:27:57.137835 1723 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 14:27:57.138093 update_engine[1723]: I1213 14:27:57.138072 1723 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 14:27:57.139569 update_engine[1723]: E1213 14:27:57.139534 1723 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 14:27:57.139711 update_engine[1723]: I1213 14:27:57.139657 1723 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 14:28:07.137640 update_engine[1723]: I1213 14:28:07.137585 1723 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 14:28:07.138920 update_engine[1723]: I1213 14:28:07.138238 1723 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 14:28:07.139045 update_engine[1723]: I1213 14:28:07.138962 1723 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 14:28:07.139744 update_engine[1723]: E1213 14:28:07.139719 1723 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 14:28:07.139841 update_engine[1723]: I1213 14:28:07.139823 1723 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 14:28:09.760832 env[1731]: time="2024-12-13T14:28:09.760782792Z" level=info msg="StopPodSandbox for \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\""
Dec 13 14:28:09.761351 env[1731]: time="2024-12-13T14:28:09.760896225Z" level=info msg="TearDown network for sandbox \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\" successfully"
Dec 13 14:28:09.761351 env[1731]: time="2024-12-13T14:28:09.760943448Z" level=info msg="StopPodSandbox for \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\" returns successfully"
Dec 13 14:28:09.769492 env[1731]: time="2024-12-13T14:28:09.761892602Z" level=info msg="RemovePodSandbox for \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\""
Dec 13 14:28:09.769492 env[1731]: time="2024-12-13T14:28:09.763438435Z" level=info msg="Forcibly stopping sandbox \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\""
Dec 13 14:28:09.769492 env[1731]: time="2024-12-13T14:28:09.764252146Z" level=info msg="TearDown network for sandbox \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\" successfully"
Dec 13 14:28:09.774549 env[1731]: time="2024-12-13T14:28:09.774496544Z" level=info msg="RemovePodSandbox \"e8cc49387e8465f52f29cd6737c4636dcb55e91abafbd8d788665eda8e24a317\" returns successfully"
Dec 13 14:28:09.775391 env[1731]: time="2024-12-13T14:28:09.775353674Z" level=info msg="StopPodSandbox for \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\""
Dec 13 14:28:09.775546 env[1731]: time="2024-12-13T14:28:09.775489035Z" level=info msg="TearDown network for sandbox \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\" successfully"
Dec 13 14:28:09.775546 env[1731]: time="2024-12-13T14:28:09.775534952Z" level=info msg="StopPodSandbox for \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\" returns successfully"
Dec 13 14:28:09.775946 env[1731]: time="2024-12-13T14:28:09.775919779Z" level=info msg="RemovePodSandbox for \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\""
Dec 13 14:28:09.776091 env[1731]: time="2024-12-13T14:28:09.776048203Z" level=info msg="Forcibly stopping sandbox \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\""
Dec 13 14:28:09.776220 env[1731]: time="2024-12-13T14:28:09.776193027Z" level=info msg="TearDown network for sandbox \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\" successfully"
Dec 13 14:28:09.782102 env[1731]: time="2024-12-13T14:28:09.782030274Z" level=info msg="RemovePodSandbox \"1b8941eea5e534cabeb88d5817130ef4cecd0edf49167d22c4380b2a535b4e45\" returns successfully"
Dec 13 14:28:09.782630 env[1731]: time="2024-12-13T14:28:09.782593017Z" level=info msg="StopPodSandbox for \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\""
Dec 13 14:28:09.782922 env[1731]: time="2024-12-13T14:28:09.782760728Z" level=info msg="TearDown network for sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" successfully"
Dec 13 14:28:09.783002 env[1731]: time="2024-12-13T14:28:09.782920745Z" level=info msg="StopPodSandbox for \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" returns successfully"
Dec 13 14:28:09.783353 env[1731]: time="2024-12-13T14:28:09.783323088Z" level=info msg="RemovePodSandbox for \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\""
Dec 13 14:28:09.783514 env[1731]: time="2024-12-13T14:28:09.783355815Z" level=info msg="Forcibly stopping sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\""
Dec 13 14:28:09.783582 env[1731]: time="2024-12-13T14:28:09.783535608Z" level=info msg="TearDown network for sandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" successfully"
Dec 13 14:28:09.789834 env[1731]: time="2024-12-13T14:28:09.789550615Z" level=info msg="RemovePodSandbox \"2e0c2f8b34310b089517ff98125a080d885118879c1909f207ae6610b7fec1ba\" returns successfully"
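The StopPodSandbox / TearDown / RemovePodSandbox triplets above are kubelet's periodic garbage collection of exited pod sandboxes, issued to containerd over the CRI. A rough sketch of the same call sequence against the CRI socket; the socket path is containerd's usual default, and this illustrates the protocol rather than reproducing kubelet's actual GC code:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI socket; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx := context.Background()
	sandboxes, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, sb := range sandboxes.Items {
		if sb.State != runtimeapi.PodSandboxState_SANDBOX_NOTREADY {
			continue // only collect sandboxes that are no longer ready
		}
		// Mirrors the log: stop first (tears down sandbox networking),
		// then remove the sandbox itself.
		if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			fmt.Println("stop:", err)
			continue
		}
		if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			fmt.Println("remove:", err)
		}
	}
}

The "Forcibly stopping sandbox" entries correspond to the removal path, which stops the sandbox again before deleting it even when the earlier stop already succeeded.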
Dec 13 14:28:17.137990 update_engine[1723]: I1213 14:28:17.137827 1723 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 14:28:17.138646 update_engine[1723]: I1213 14:28:17.138373 1723 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 14:28:17.138972 update_engine[1723]: I1213 14:28:17.138645 1723 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 14:28:17.139302 update_engine[1723]: E1213 14:28:17.139273 1723 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 14:28:17.139480 update_engine[1723]: I1213 14:28:17.139374 1723 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 14:28:17.139480 update_engine[1723]: I1213 14:28:17.139382 1723 omaha_request_action.cc:621] Omaha request response:
Dec 13 14:28:17.139589 update_engine[1723]: E1213 14:28:17.139487 1723 omaha_request_action.cc:640] Omaha request network transfer failed.
Dec 13 14:28:17.139589 update_engine[1723]: I1213 14:28:17.139503 1723 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Dec 13 14:28:17.139589 update_engine[1723]: I1213 14:28:17.139508 1723 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 14:28:17.139589 update_engine[1723]: I1213 14:28:17.139513 1723 update_attempter.cc:306] Processing Done.
Dec 13 14:28:17.139589 update_engine[1723]: E1213 14:28:17.139528 1723 update_attempter.cc:619] Update failed.
Dec 13 14:28:17.139589 update_engine[1723]: I1213 14:28:17.139535 1723 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Dec 13 14:28:17.139589 update_engine[1723]: I1213 14:28:17.139539 1723 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Dec 13 14:28:17.139589 update_engine[1723]: I1213 14:28:17.139545 1723 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Dec 13 14:28:17.140157 update_engine[1723]: I1213 14:28:17.139627 1723 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Dec 13 14:28:17.140157 update_engine[1723]: I1213 14:28:17.139657 1723 omaha_request_action.cc:270] Posting an Omaha request to disabled
Dec 13 14:28:17.140157 update_engine[1723]: I1213 14:28:17.139663 1723 omaha_request_action.cc:271] Request:
Dec 13 14:28:17.140157 update_engine[1723]:
Dec 13 14:28:17.140157 update_engine[1723]:
Dec 13 14:28:17.140157 update_engine[1723]:
Dec 13 14:28:17.140157 update_engine[1723]:
Dec 13 14:28:17.140157 update_engine[1723]:
Dec 13 14:28:17.140157 update_engine[1723]:
Dec 13 14:28:17.140157 update_engine[1723]: I1213 14:28:17.139669 1723 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 14:28:17.140157 update_engine[1723]: I1213 14:28:17.140065 1723 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 14:28:17.140804 update_engine[1723]: I1213 14:28:17.140343 1723 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 14:28:17.141102 update_engine[1723]: E1213 14:28:17.140992 1723 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 14:28:17.144960 update_engine[1723]: I1213 14:28:17.144915 1723 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Dec 13 14:28:17.144960 update_engine[1723]: I1213 14:28:17.144937 1723 omaha_request_action.cc:621] Omaha request response:
Dec 13 14:28:17.144960 update_engine[1723]: I1213 14:28:17.144943 1723 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 14:28:17.144960 update_engine[1723]: I1213 14:28:17.144948 1723 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Dec 13 14:28:17.144960 update_engine[1723]: I1213 14:28:17.144953 1723 update_attempter.cc:306] Processing Done.
Dec 13 14:28:17.144960 update_engine[1723]: I1213 14:28:17.144958 1723 update_attempter.cc:310] Error event sent.
Dec 13 14:28:17.145220 update_engine[1723]: I1213 14:28:17.144973 1723 update_check_scheduler.cc:74] Next update check in 40m22s
Dec 13 14:28:17.146874 locksmithd[1792]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Dec 13 14:28:17.146874 locksmithd[1792]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
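With the error event sent, update_engine goes idle and schedules the next check in 40m22s. Omaha-style clients typically derive such intervals from a fixed base plus random fuzz so a fleet does not poll the update server in lockstep; the sketch below shows that pattern only, with a 45-minute base and 10-minute fuzz window as assumed values, not numbers taken from this log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextCheck spreads check times uniformly across [base-fuzz/2, base+fuzz/2)
// so that many machines configured identically still check at different times.
func nextCheck(base, fuzz time.Duration) time.Duration {
	return base - fuzz/2 + time.Duration(rand.Int63n(int64(fuzz)))
}

func main() {
	// Assumed parameters; they merely illustrate how a value like
	// "40m22s" can fall out of a fuzzed schedule.
	fmt.Println(nextCheck(45*time.Minute, 10*time.Minute))
}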
Dec 13 14:28:19.335507 systemd[1]: cri-containerd-6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253.scope: Deactivated successfully.
Dec 13 14:28:19.336195 systemd[1]: cri-containerd-6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253.scope: Consumed 3.347s CPU time.
Dec 13 14:28:19.373917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253-rootfs.mount: Deactivated successfully.
Dec 13 14:28:19.393865 env[1731]: time="2024-12-13T14:28:19.393805688Z" level=info msg="shim disconnected" id=6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253
Dec 13 14:28:19.393865 env[1731]: time="2024-12-13T14:28:19.393861251Z" level=warning msg="cleaning up after shim disconnected" id=6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253 namespace=k8s.io
Dec 13 14:28:19.394642 env[1731]: time="2024-12-13T14:28:19.393876234Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:19.405847 env[1731]: time="2024-12-13T14:28:19.405801703Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5639 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:19.647049 kubelet[2748]: I1213 14:28:19.646074 2748 scope.go:117] "RemoveContainer" containerID="6ae7b26be156128644d3bf50ab925e2d0e44e1884221de5455eed5a48d153253"
Dec 13 14:28:19.652156 env[1731]: time="2024-12-13T14:28:19.652065366Z" level=info msg="CreateContainer within sandbox \"efd92cc7761a1d2be3e61fc6fee18d537101825fc344c995262987a97d61b575\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:28:19.687508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220709765.mount: Deactivated successfully.
Dec 13 14:28:19.691830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868070580.mount: Deactivated successfully.
Dec 13 14:28:19.692924 env[1731]: time="2024-12-13T14:28:19.692878947Z" level=info msg="CreateContainer within sandbox \"efd92cc7761a1d2be3e61fc6fee18d537101825fc344c995262987a97d61b575\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6d6fcea0caf32d40bcd4a1c0e20676e27c9d49aebd4be10a62f54a00744e79df\""
Dec 13 14:28:19.694624 env[1731]: time="2024-12-13T14:28:19.694553285Z" level=info msg="StartContainer for \"6d6fcea0caf32d40bcd4a1c0e20676e27c9d49aebd4be10a62f54a00744e79df\""
Dec 13 14:28:19.741489 systemd[1]: Started cri-containerd-6d6fcea0caf32d40bcd4a1c0e20676e27c9d49aebd4be10a62f54a00744e79df.scope.
Dec 13 14:28:19.827713 env[1731]: time="2024-12-13T14:28:19.827653050Z" level=info msg="StartContainer for \"6d6fcea0caf32d40bcd4a1c0e20676e27c9d49aebd4be10a62f54a00744e79df\" returns successfully"
Dec 13 14:28:23.209863 kubelet[2748]: E1213 14:28:23.209824 2748 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-15?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 14:28:24.129583 systemd[1]: cri-containerd-db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49.scope: Deactivated successfully.
Dec 13 14:28:24.130023 systemd[1]: cri-containerd-db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49.scope: Consumed 1.813s CPU time.
Dec 13 14:28:24.171159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49-rootfs.mount: Deactivated successfully.
Dec 13 14:28:24.206597 env[1731]: time="2024-12-13T14:28:24.206240443Z" level=info msg="shim disconnected" id=db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49
Dec 13 14:28:24.206597 env[1731]: time="2024-12-13T14:28:24.206598390Z" level=warning msg="cleaning up after shim disconnected" id=db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49 namespace=k8s.io
Dec 13 14:28:24.208029 env[1731]: time="2024-12-13T14:28:24.206613530Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:24.242788 env[1731]: time="2024-12-13T14:28:24.240156824Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5699 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:24.676556 kubelet[2748]: I1213 14:28:24.676447 2748 scope.go:117] "RemoveContainer" containerID="db32da021a77c89c534320b8c903a2d4286b5a5549cf85faa2ecca84e6c75a49"
Dec 13 14:28:24.679635 env[1731]: time="2024-12-13T14:28:24.679588506Z" level=info msg="CreateContainer within sandbox \"54ba16c5ae62d2d9944af80acf0939bcb2028e8699e5707a6434f175d47e318c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:28:24.707196 env[1731]: time="2024-12-13T14:28:24.707149061Z" level=info msg="CreateContainer within sandbox \"54ba16c5ae62d2d9944af80acf0939bcb2028e8699e5707a6434f175d47e318c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d62b3e2bd3f405cd541f1389dfe531cef18a5764b1c0bd2c01b5c6119d5e3a2d\""
Dec 13 14:28:24.707949 env[1731]: time="2024-12-13T14:28:24.707914968Z" level=info msg="StartContainer for \"d62b3e2bd3f405cd541f1389dfe531cef18a5764b1c0bd2c01b5c6119d5e3a2d\""
Dec 13 14:28:24.761689 systemd[1]: Started cri-containerd-d62b3e2bd3f405cd541f1389dfe531cef18a5764b1c0bd2c01b5c6119d5e3a2d.scope.
Dec 13 14:28:24.834834 env[1731]: time="2024-12-13T14:28:24.834778224Z" level=info msg="StartContainer for \"d62b3e2bd3f405cd541f1389dfe531cef18a5764b1c0bd2c01b5c6119d5e3a2d\" returns successfully"
Dec 13 14:28:25.175851 systemd[1]: run-containerd-runc-k8s.io-d62b3e2bd3f405cd541f1389dfe531cef18a5764b1c0bd2c01b5c6119d5e3a2d-runc.vc9rFc.mount: Deactivated successfully.
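Net effect of the final minute of log: kube-controller-manager and then kube-scheduler each exited once and were recreated by kubelet inside their existing sandboxes, visible as Attempt:1 in the ContainerMetadata entries above (the intervening "Failed to update lease" timeout is consistent with a briefly overloaded control plane). A sketch that surfaces such restarts over the CRI, using the same assumed containerd socket as the earlier sketch:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Attempt > 0 means kubelet recreated this container at least once
		// in the same sandbox, as it did for kube-controller-manager and
		// kube-scheduler in the entries above.
		if c.Metadata.GetAttempt() > 0 {
			fmt.Printf("%s attempt=%d state=%s\n", c.Metadata.GetName(), c.Metadata.GetAttempt(), c.State)
		}
	}
}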